Can this be considered a pure function (functional programming)?

I've been reading about functional programming and its concepts. It's clear to me that in big projects you always need to mix paradigms (to some adequate degree), such as OO and functional. In theory, concepts such as function purity seem too strict, for example:
The function always evaluates the same result value given the same argument value(s). The function result value cannot depend on any hidden information or state that may change while program execution proceeds or between different executions of the program, nor can it depend on any external input from I/O devices. (https://en.wikipedia.org/wiki/Pure_function)
That said, is this code (or can it be considered) a pure function?
const externalVar = 10;

function timesTen(value) {
  return externalVar * value;
}
I'm asking this because, in this case, the timesTen function will always return the same value for a given input, and no one can change the value of externalVar, as it is a constant. However, this code breaks the rule of accessing the external function's scope.

Yes. It is guaranteed to be pure.
The reason is that it only depends on bound and immutable free variables.
However, this code breaks the rule of accessing external function's
scope.
There is nothing in your quote that says you cannot access free variables. It refers to external input such as reading from a file or the network, not to a free variable from an enclosing scope.
Even Haskell uses global function names like foldr; it is a free variable in every function where it is used, and of course the result is pure.
Remember that named functions are just variables. parseInt is a variable that points to a function; it would be hard to build anything at all if every function used inside another function had to be passed in as a parameter.
If you redefine parseInt to something that is not pure, or change it during the program's execution so that it behaves differently, then no function calling it would be pure.
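For contrast, here is a sketch (with illustrative names) of how a *mutable* free variable breaks purity:

```javascript
// `rate` is a mutable free variable, so `applyRate` is NOT pure:
// the same argument can yield different results over time.
let rate = 10;

function applyRate(value) {
  return rate * value;
}

console.log(applyRate(2)); // 20
rate = 3; // reassigning the free variable changes applyRate's behavior
console.log(applyRate(2)); // 6 -- same input, different output
```

With const externalVar this reassignment is impossible, which is exactly why timesTen stays pure.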
Function composition and partial evaluation work because they supply free variables. It's an essential method of abstraction in functional programming, e.g.:
function compose(f2, f1) {
  return (...args) => f2(f1(...args));
}

function makeAdder(initialValue) {
  return v => v + initialValue;
}

const add11 = compose(makeAdder(10), makeAdder(1));
add11(5); // ==> 16
This is pure. The closure variables / free variables f1, f2, and initialValue never change for the created functions. add11 is a pure function.
Now look at compose again. It looks pure, but it can be tainted: if the functions passed to it are not both pure, the result isn't pure either.
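A sketch of that taint (the helper names are illustrative): pass one impure function to compose and the composition is impure as well.

```javascript
function compose(f2, f1) {
  return (...args) => f2(f1(...args));
}

let counter = 0;
const impureIncrement = x => x + ++counter; // reads and mutates external state

const tainted = compose(x => x * 2, impureIncrement);
console.log(tainted(1)); // 4 -- (1 + 1) * 2
console.log(tainted(1)); // 6 -- (1 + 2) * 2: same input, different output
```

compose itself did nothing wrong; the impurity of impureIncrement leaks through it.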
OO can be purely functional too!
They can easily be combined by not mutating the objects you create.
class FunctionalNumber {
  constructor(value) {
    this.value = value;
  }

  add(fn) {
    return new FunctionalNumber(this.value + fn.value);
  }

  sub(fn) {
    return new FunctionalNumber(this.value - fn.value);
  }
}
This class is purely functional.
In fact, you can think of a method call obj.someMethod(arg1, arg2) as a function call with obj as the first argument, someFunction(obj, arg1, arg2). The difference is only syntactic: if someFunction mutated obj you would say it was not pure, and the same goes for someMethod and obj.
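A quick sketch of that equivalence (Num and add are made-up names for illustration):

```javascript
// Method form: the state travels as `this`.
class Num {
  constructor(value) { this.value = value; }
  add(other) { return new Num(this.value + other.value); }
}

// Equivalent function form: the object is just the first argument.
function add(obj, other) {
  return new Num(obj.value + other.value);
}

const a = new Num(1), b = new Num(2);
console.log(a.add(b).value);  // 3
console.log(add(a, b).value); // 3 -- same result, only the syntax differs
```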
You can make classes for large data structures that are functional, which means you never have to copy the structure before changing it, e.g. when writing a backtracking puzzle solver. A simple example is the pair in Haskell and Lisp. Here is one way to make it in JavaScript:
class Cons {
  constructor(car, cdr) {
    this.car = car;
    this.cdr = cdr;
  }
}
const lst = new Cons(1, new Cons(2, new Cons(3, null)));
const lst0 = new Cons(0, lst);
lst0 is lst but with a new element in front. lst0 reuses everything in lst. Everything from lists to binary trees can be made with this, and you can build many sequential data structures out of immutable binary trees. This technique has been around since the 1950s.
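A quick check of that structural sharing, reusing the Cons class from above:

```javascript
class Cons {
  constructor(car, cdr) {
    this.car = car;
    this.cdr = cdr;
  }
}

const lst = new Cons(1, new Cons(2, new Cons(3, null)));
const lst0 = new Cons(0, lst);

// lst0 shares lst's cells instead of copying them:
console.log(lst0.cdr === lst); // true -- the tail is the very same structure
console.log(lst.car);          // 1 -- lst itself is untouched
```

Because nothing is ever mutated, the sharing is invisible to callers; only one new cell was allocated to build lst0.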

I understand your point and totally agree with @Sylwester, but there is a point worth mentioning: with reflection, external constant values can be modified, breaking the purity of your function. We know that everything in IT can be hacked, and we should not put this above the concepts, but in practice we should keep it clearly in mind: in this way, functional purity can be unsound.


`is pure` trait and default parameters

The following &greet function is pure, and can appropriately be marked with the is pure trait.
sub greet(Str:D $name) { say "Hello, $name" }
my $user = get-from-db('USER');
greet($user);
This one, however, is not:
sub greet {
    my $name = get-from-db('USER');
    say "Hello, $name"
}
greet();
What about this one, though?
sub greet(Str:D $name = get-from-db('USER')) { say "Hello, $name" }
greet();
From "inside" the function, it seems pure – when its parameters are bound to the same values, it always produces the same output, without side effects. But from outside the function, it seems impure – when called twice with the same arguments, it can produce different return values. Which perspective does Raku/Rakudo take?
There are at least two strategies a language might take when implementing default values for parameters:
Treat the parameter default value as something that the compiler, upon encountering a call without enough arguments, should emit at the callsite in order to produce the extra argument to pass to the callee. This means that it's possible to support default values for parameters without any explicit support for it in the calling conventions. This also, however, requires that you always know where the call is going at compile time (or at least know it accurately enough to insert the default value, and one can't expect to use different default values in method overrides in subclasses and have it work out).
Have a calling convention powerful enough that the callee can discover that a value was not passed for the parameter, and then compute the default value.
With its dynamic nature, only the second of these really makes sense for Raku, and so that is what it does.
In a language doing strategy 1 it could arguably make sense to mark such a function as pure, insofar as the code that calculates the default lives at each callsite, and so anything doing an analysis and perhaps transformation based upon the purity will already be having to deal with the code that evaluates the default value, and can see that it is not a source of a pure value.
Under strategy 2, and thus Raku, we should understand default values as an implementation detail of the block or routine that has the default in its signature. Thus if the code calculating the default value is impure, then the routine as a whole is impure, and so the is pure trait is not suitable.
More generally, the is pure trait is applicable if for a given argument capture we can always expect the same return value. In the example given, the argument capture \() contradicts this.
An alternative factoring here would be to use multi subs instead of parameter defaults, and to mark only one candidate with is pure.
When you say that a sub is pure, you are guaranteeing that any given input will always produce the same output. In your last example of sub greet, it looks to me that you cannot guarantee that for the default-value case, as the content of the database may change, or get-from-db may have side effects.
Of course, if you are sure that the database doesn't change, and there aren't any side-effects, you could still apply is pure to the sub, but why would you be using a database then?
Why would you mark a sub as is pure anyway? Well, it allows the compiler to constant-fold a call to a subroutine at compile time. Take e.g.:
sub foo($a) is pure {
    2 * $a
}
say foo(21); # 42
If you look at the code that is generated for this:
$ raku --target=optimize -e 'sub foo($a) is pure { 2 * $a }; say foo(21)'
then you will see this near the end:
│ │ - QAST::IVal(42)
The 42 is the constant folded call for foo(21). So this way the entire call is optimized away, because the sub was marked is pure and the parameter you provided was a constant.

What's the difference between calling a function and functional programming

I'm referring to this article https://codeburst.io/a-beginner-friendly-intro-to-functional-programming-4f69aa109569. I've been confused with how functional programming differs from procedural programming for a while and want to understand it.
function getOdds2(arr) {
  return arr.filter(num => num % 2 !== 0);
}
"we simply define the outcome we want to see by using a method called filter, and allow the machine to take care of all the steps in between. This is a more declarative approach."
They say they define the outcome we want to see. How is that any different from procedural programming, where you call a function (a subroutine) and get the output you want to see? How is this "declarative approach" any different from just calling a function? The only difference I see between functional and procedural programming is the style.
Procedural programming uses procedures (i.e. calls functions, aka subroutines) and potentially (refers to, and/or changes in place aka "mutates") lots of global variables and/or data structures, i.e. "state".
Functional programming seeks to have all the relevant state in the function's arguments and return values, to minimize if not outright forbid the state external to the functions used (called) in a program.
So, it seeks to have few (or none at all) global variables and data structures, just have functions (subroutines) passing data along until the final one produces the result.
Both are calling functions / subroutines, yes. That's just structured programming, though.
"Pure" functional programming doesn't want its functions to have (and maintain) any internal state between calls either. That way the program's functions / subroutines / procedures become much like mathematical functions, always producing the same output for the same input(s). The operation of such functions becomes reliable, allowing for the more declarative style of programming (when a function might do something different each time it's called, there's no single concept we can name it for).
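One practical payoff of that reliability, sketched below (memoize and square are illustrative names, not from the article): a pure function can be cached safely, because the same input always maps to the same output.

```javascript
// Wrap a pure single-argument function with a result cache.
function memoize(fn) {
  const cache = new Map();
  return arg => {
    if (!cache.has(arg)) cache.set(arg, fn(arg)); // safe only because fn is pure
    return cache.get(arg);
  };
}

const square = memoize(n => n * n);
console.log(square(4)); // 16, computed
console.log(square(4)); // 16, served from the cache -- indistinguishable to the caller
```

If the wrapped function kept internal state, the cached and freshly computed answers could diverge; purity is what makes the substitution safe.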
With functional programming, everything revolves around functions. In the example you gave above, you are defining and executing a function without even knowing it.
You're passing a lambda (anonymous function) to the filter function. With functional programming, that's really the point - defining everything in terms of functions, passing functions as arguments to functions, etc.
So yes, it is about style. It's about the ease of expressing your ideas in code, and about being terser as well.
Take this for example:
function getOdds(array) {
  var odds = [];
  for (var i = 0; i < array.length; i++) {
    if (isOdd(array[i])) odds.push(array[i]);
  }
  return odds;
}

function isOdd(n) {
  return n % 2 != 0;
}
it gets the job done but it's really verbose. Now compare it to:
function getOdds(array) {
  return array.filter(n => n % 2 != 0);
}
Here you're defining the isOdd function anonymously as a lambda and passing it directly to the filter function. Saves a lot of keystrokes.

How can I modify a collection while also iterating over it?

I have a Board (a.k.a. &mut Vec<Vec<Cell>>) which I would like to update while iterating over it. The new value I want to update with is derived from a function which requires a &Vec<Vec<Cell>> to the collection I'm updating.
I have tried several things:
Use board.iter_mut().enumerate() and row.iter_mut().enumerate() so that I could update the cell in the innermost loop. Rust does not allow calling the next_gen function because it requires a &Vec<Vec<Cell>>, and you cannot take an immutable reference when you already have a mutable one.
Change the next_gen function signature to accept a &mut Vec<Vec<Cell>>. Rust does not allow multiple mutable references to an object.
I'm currently deferring all the updates to a HashMap and then applying them after I've performed my iteration:
fn step(board: &mut Board) {
    let mut cells_to_update: HashMap<(usize, usize), Cell> = HashMap::new();
    for (row_index, row) in board.iter().enumerate() {
        for (column_index, cell) in row.iter().enumerate() {
            let cell_next = next_gen((row_index, column_index), &board);
            if *cell != cell_next {
                cells_to_update.insert((row_index, column_index), cell_next);
            }
        }
    }
    println!("To Update: {:?}", cells_to_update);
    for ((row_index, column_index), cell) in cells_to_update {
        board[row_index][column_index] = cell;
    }
}
Full source
Is there a way that I could make this code update the board "in place", that is, inside the innermost loop while still being able to call next_gen inside the innermost loop?
Disclaimer:
I'm learning Rust and I know this is not the best way to do this. I'm playing around to see what I can and cannot do. I'm also trying to limit any copying to restrict myself a little bit. As oli_obk - ker mentions, this implementation for Conway's Game of Life is flawed.
This code was intended to gauge a couple of things:
if this is even possible
if it is idiomatic Rust
From what I have gathered in the comments, it is possible with std::cell::Cell. However, using std::cell::Cell circumvents some of the core Rust principles, which I described as my "dilemma" in the original question.
Is there a way that I could make this code update the board "in place"?
There exists a type specially made for situations such as these. It's coincidentally called std::cell::Cell. You're allowed to mutate the contents of a Cell even when it has been immutably borrowed multiple times. Cell is limited to types that implement Copy (for others you have to use RefCell, and if multiple threads are involved then you must use an Arc in combination with something like a Mutex).
use std::cell::Cell;

fn main() {
    let board = vec![Cell::new(0), Cell::new(1), Cell::new(2)];
    for a in board.iter() {
        for b in board.iter() {
            a.set(a.get() + b.get());
        }
    }
    println!("{:?}", board);
}
It entirely depends on your next_gen function. Assuming we know nothing about the function except its signature, the easiest way is to use indices:
fn step(board: &mut Board) {
    for row_index in 0..board.len() {
        for column_index in 0..board[row_index].len() {
            let cell_next = next_gen((row_index, column_index), &board);
            if board[row_index][column_index] != cell_next {
                board[row_index][column_index] = cell_next;
            }
        }
    }
}
With more information about next_gen a different solution might be possible, but it sounds a lot like a cellular automaton to me, and to the best of my knowledge this cannot be done in an iterator-way in Rust without changing the type of Board.
You might fear that the indexing solution will be less efficient than an iterator solution, but you should trust LLVM on this. In case your next_gen function is in another crate, you should mark it #[inline] so LLVM can optimize it too (not necessary if everything is in one crate).
Not an answer to your question, but to your problem:
Since you are implementing Conway's Game of Life, you cannot do the modification in-place. Imagine the following pattern:
00000
00100
00100
00100
00000
If you update line 2 in place, its 1 will change to a 0, since it has only one 1 in its neighborhood. The middle 1 will then see only one live neighbor instead of the two that were there to begin with, and will incorrectly die as well. Therefore you always need to either make a copy of the entire Board, or, as you did in your code, write all the changes to some other location and splice them in after going through the entire board.

Any difference between First Class Function and High Order Function

I'm wondering whether there is a difference between a First-Class Function and a Higher-Order Function, and if so, what it is.
I read through those two wiki pages and they look rather similar.
If they are about the same thing, why are two terminologies needed?
I tried to google it but haven't found anything useful.
There is a difference. When you say that a language has first-class functions, it means that the language treats functions as values – that you can assign a function into a variable, pass it around etc. Higher-order functions are functions that work on other functions, meaning that they take one or more functions as an argument and can also return a function.
The “higher-order” concept can be applied to functions in general, like functions in the mathematical sense. The “first-class” concept only has to do with functions in programming languages. It’s seldom used when referring to a function, such as “a first-class function”. It’s much more common to say that “a language has/hasn’t first-class function support”.
The two things are closely related, as it’s hard to imagine a language with first-class functions that would not also support higher-order functions, and conversely a language with higher-order functions but without first-class function support.
First class functions are functions that are treated like an object (or are assignable to a variable).
Higher order functions are functions that take at least one first class function as a parameter, or return at least one first class function.
They're different.
First class functions
Values in a language that are handled uniformly throughout are called "first class". They may be stored in data structures, passed as arguments, or used in control structures.
Languages which support values with function types, and treat them the same as non-function values, can be said to have "first class functions".
Higher order functions
One of the consequences of having first class functions is that you should be able to pass a function as an argument to another function. The latter function is now "higher order". It is a function that takes a function as an argument.
The canonical example is "map"
map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = f x : map f xs
That is, it takes a function, and an array, and returns a new array with the function applied to each element.
Functional languages -- languages where functions are the primary means of building programs -- all have first class functions. Most also have higher order functions (very rare exceptions being languages like Excel, which can be said to be functional, but not higher order).
In addition to the previous answers, note that a language with first-class functions automatically enables the expression of higher-order functions (because you can pass functions as parameters like any other value).
On the other hand, you can imagine languages that support higher-order functions, but do not make functions first-class (and where parameters that are functions are treated specially, and different from "ordinary" value parameters).
So the presence of first-class functions (as a language feature) implies the presence of higher-order functions, but not the other way round.
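The two notions side by side in a JavaScript sketch (double, ops, and applyTwice are illustrative names):

```javascript
// First-class: functions are values -- storable in variables and data structures.
const double = x => x * 2;
const ops = [double, x => x + 1];

// Higher-order: a function that takes (or returns) a function.
function applyTwice(f, x) {
  return f(f(x));
}

console.log(applyTwice(double, 3)); // 12
console.log(applyTwice(ops[1], 3)); // 5
```

Here double being assignable and storable shows first-class support; applyTwice accepting it as an argument is what makes applyTwice higher-order.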
First Class functions can:
Be stored in variables
Be returned from a function.
Be passed as arguments into another function.
A Higher-Order Function is a function that returns another function or takes one as an argument.
For example:
function highOrderFunc() {
  return function () {
    alert('hello');
  };
}
First class function:
The ability to use functions as values is known as having first-class functions; for example, we can pass a function as an argument to another function.
var a = function (parameter) {
  return function () {
  };
};

console.log(a());
Higher-Order Function:
A function that receives another function as an argument, returns a new function, or both, is called a higher-order function. Higher-order functions are only possible because of first-class functions.
First class functions mean that everything you can do with other types (variables, booleans, numbers...), you can do with functions: for example, assign them to variables, pass them around, or create them on the fly.
First-class Function is a general term.
Higher-Order Function is a variation of a First-class Function; it's a subspecies of First-class Function.
A Higher-Order Function is a function that returns a function or takes other functions as arguments.
Example of High Order Function:
function sayHello() {
  return () => {
    console.log("Hello!");
  };
}
But at the same time, we can call this a First-class Function. The answer is just less precise than saying Higher-Order Function, because that is the more concrete term.
Resource: developer.mozilla
In simple words: there is the brand Bentley; it's a general term.
Bentley has models:
Continental (it's Bentley, it's just a more concrete term)
Flying Spur (it's Bentley, it's just a more concrete term)
Bentayga (it's Bentley, it's just a more concrete term)
A first-class function is a function that is treated as a value. Among other things, such functions can be used as arguments to other functions.
In JavaScript, functions are first-class citizens: they are treated as values, and since they are values, they are also objects.
const Person = {
  dance: (name) => {
    return `${name} can dance`;
  },
  walk: (name) => {
    return `I am sure ${name} can walk`;
  },
};

console.log(Person.dance("John"));
console.log(Person.walk("John"));
In higher-order functions, other functions are taken as arguments (callbacks) or returned as results.
There is a lot of use for this concept in JavaScript.
Higher-order functions include:
map
sort
filter
map example, first without a higher-order function:
const numbers = [1, 2, 3, 4, 5];
function addOneMore(array, newArr = []) {
  for (let i = 0; i < array.length; i++) {
    newArr.push(array[i] + 1);
  }
  return newArr;
}
const outputData = addOneMore(numbers);
console.log(outputData);
and the same with the higher-order function map():
const numbers = [1, 2, 3, 4, 5];
const outputData = numbers.map((number) => number + 1);
console.log(outputData);

"Closures are poor man's objects and vice versa" - What does this mean?

Closures are poor man's objects and vice versa.
I have seen this statement at many places on the web (including SO) but I don't quite understand what it means. Could someone please explain what it exactly means?
If possible, please include examples in your answer.
Objects are poor man's closures.
Consider Java. Java is an object-oriented programming language with no language level support for real lexical closures. As a work-around Java programmers use anonymous inner classes that can close over the variables available in lexical scope (provided they're final). In this sense, objects are poor man's closures.
Closures are poor man's objects.
Consider Haskell. Haskell is a functional language with no language level support for real objects. However they can be modeled using closures, as described in this excellent paper by Oleg Kiselyov and Ralf Lammel. In this sense, closures are poor man's objects.
If you come from an OO background, you'll probably find thinking in terms of objects more natural, and may therefore think of them as a more fundamental concept than closures. If you come from a FP background, you might find thinking in terms of closures more natural, and may therefore think of them as a more fundamental concept than objects.
Moral of the story is that closures and objects are ideas that are expressible in terms of each other, and none is more fundamental than the other. That's all there is to the statement under consideration.
In philosophy, this is referred to as model dependent realism.
The point is that closures and objects accomplish the same goal: encapsulation of data and/or functionality in a single, logical unit.
For example, you might make a Python class that represents a dog like this:
class Dog(object):
    def __init__(self):
        self.breed = "Beagle"
        self.height = 12
        self.weight = 15
        self.age = 1

    def feed(self, amount):
        self.weight += amount / 5.0

    def grow(self):
        self.weight += 2
        self.height += .25

    def bark(self):
        print "Bark!"
And then I instantiate the class as an object
>>> Shaggy = Dog()
The Shaggy object has data and functionality built in. When I call Shaggy.feed(5), he gains a pound. That pound is stored in a variable that is an attribute of the object, which more or less means that it's in the object's internal scope.
If I was coding some Javascript, I'd do something similar:
var Shaggy = function() {
  var breed = "Beagle";
  var height = 12;
  var weight = 15;
  var age = 1;
  return {
    feed : function(amount) {
      weight += amount / 5.0;
    },
    grow : function() {
      weight += 2;
      height += .25;
    },
    bark : function() {
      window.alert("Bark!");
    },
    stats : function() {
      window.alert(breed + "," + height + "," + weight + "," + age);
    }
  };
}();
Here, instead of creating a scope within an object, I've created a scope within a function and then called that function. The function returns a JavaScript object composed of some functions. Because those functions access data that was allocated in the local scope, the memory isn't reclaimed, allowing you to continue to use them through the interface provided by the closure.
An object, at its simplest, is just a collection of state and functions that operate on that state. A closure is also a collection of state and a function that operates on that state.
Let's say I call a function that takes a callback. In this callback, I need to operate on some state known before the function call. I can create an object that embodies this state ("fields") and contains a member function ("method") that performs as the callback. Or, I could take the quick and easy ("poor man's") route and create a closure.
As an object:
class CallbackState {
    object state;

    public CallbackState(object state) { this.state = state; }

    public void Callback() {
        // do something with state
    }
}

void Foo() {
    object state = GenerateState();
    CallbackState callback = new CallbackState(state);
    PerformOperation(callback.Callback);
}
This is pseudo-C#, but is similar in concept to other OO languages. As you can see, there's a fair amount of boilerplate involved with the callback class to manage the state. This would be much simpler using a closure:
void Foo() {
    object state = GenerateState();
    PerformOperation(() => { /* do something with state */ });
}
This is a lambda (again, in C# syntax, but the concept is similar in other languages that support closures) that gives us all the capabilities of the class, without having to write, use, and maintain a separate class.
You'll also hear the corollary: "objects are a poor man's closure". If I can't or won't take advantage of closures, then I am forced to do their work using objects, as in my first example. Although objects provide more functionality, closures are often a better choice where a closure will work, for the reasons already stated.
Hence, a poor man without objects can often get the job done with closures, and a poor man without closures can get the job done using objects. A rich man has both and uses the right one for each job.
EDITED: The title of the question does not include "vice versa" so I'll try not to assume the asker's intent.
The two common camps are functional vs imperative languages. Both are tools that can accomplish similar tasks in different ways with different sets of concerns.
Closures are poor man's objects.
Objects are poor man's closures.
Individually, each statement usually means the author has some bias, one way or another, usually rooted in their comfort with one language or class of languages and discomfort with another. If not bias, they may be constrained by one environment or the other. The authors I read who say this sort of thing are usually zealots, purists, or language-religious types. I avoid the language-religious types if possible.
Closures are poor man's objects. Objects are poor man's closures.
The author of that is a "pragmatist" and also pretty clever. It means the author appreciates both points of view and appreciates they are conceptually one and the same. This is my sort of fellow.
Just so much sugar, as closures hide anonymous objects under their skirts.
"Objects are a poor man's closures" isn't just a statement of some theoretical equivalence — it's a common Java idiom. It's very common to use anonymous classes to wrap up a function that captures the current state. Here's how it's used:
public void foo() {
final String message = "Hey ma, I'm closed over!";
SwingUtilities.invokeLater(new Runnable() {
public void run() {
System.out.println(message);
}
});
}
This even looks a lot like the equivalent code using a closure in another language. For example, using Objective-C blocks (since Objective-C is reasonably similar to Java):
void foo() {
    NSString *message = @"Hey ma, I'm closed over!";
    [[NSOperationQueue currentQueue] addOperationWithBlock:^{
        printf("%s\n", [message UTF8String]);
    }];
}
The only real difference is that the functionality is wrapped in the new Runnable() anonymous class instance in the Java version.
That objects can be used as a replacement for closures is quite easy to understand: you just place the captured state in the object and expose the call as a method. Indeed, for example, C++ lambda closures are implemented as objects (things are sort of tricky in C++ because the language doesn't provide garbage collection, and true closures with mutable shared state are therefore hard to implement correctly because of the lifetime of the captured context).
The opposite (closures can be used as objects) is less often observed, but it's IMO a very powerful technique... consider for example (Python):
def P2d(x, y):
    def f(cmd, *args):
        nonlocal x, y
        if cmd == "x": return x
        if cmd == "y": return y
        if cmd == "set-x": x = args[0]
        if cmd == "set-y": y = args[0]
    return f
The function P2d returns a closure that has captured the two values x and y. The closure then provides read and write access to them through a command interface. For example:
p = P2d(10, 20)
p("x") # --> 10
p("set-x", 99)
p("x") # --> 99
so the closure behaves like an object; moreover, since every access goes through the command interface, it's very easy to implement delegation, inheritance, computed attributes, etc.
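The same message-passing pattern works in any language with closures; here is a direct, illustrative translation of P2d into JavaScript (p2d is a made-up name mirroring the Python version):

```javascript
function p2d(x, y) {
  // The captured variables x and y play the role of an object's private state.
  return function (cmd, ...args) {
    if (cmd === "x") return x;
    if (cmd === "y") return y;
    if (cmd === "set-x") x = args[0];
    if (cmd === "set-y") y = args[0];
  };
}

const p = p2d(10, 20);
console.log(p("x")); // 10
p("set-x", 99);
console.log(p("x")); // 99
```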
The nice book "Let Over Lambda" builds over this idea using Lisp as a language, but any language that supports closures can use this technique (in Lisp the advantage is that you can also bend the syntax using macros and read macros to improve usability and automatically generate all boilerplate code). The title of the book is exactly about this... a let wrapping a lambda:
(defun p2d (x y)
  (let ((x x) (y y))
    (lambda (cmd &rest args)
      (cond
        ((eq cmd 'x) x)
        ((eq cmd 'y) y)
        ((eq cmd 'set-x) (setq x (first args)))
        ((eq cmd 'set-y) (setq y (first args)))))))
Actually I'm not sure I agree with the "poor" adjective in this approach.
