Nested functions: Improper use of side-effects? - functional-programming

I'm learning functional programming, and have tried to solve a couple of problems in a functional style. One thing I noticed while dividing my problem into functions was that I seemed to have two options: use several disparate functions with similar parameter lists, or use nested functions which, as closures, can simply refer to bindings in the parent function.
Though I ended up going with the second approach, because it made function calls smaller and it "felt" better, my reading suggests I may be missing one of the main points of functional programming, in that this seems "side-effecty". Now granted, these nested functions cannot modify the outer bindings, as the language I was using prevents that, but if you look at each individual inner function, you can't say "given the same parameters, this function will return the same results", because they do use variables from the parent scope... am I right?
What is the desirable way to proceed?
Thanks!

Functional programming isn't all-or-nothing. If nesting the functions makes more sense, I'd go with that approach. However, if you really want the internal functions to be purely functional, explicitly pass all the needed parameters into them.
Here's a little example in Scheme:
(define (foo a)
  (define (bar b)
    (+ a b)) ; getting a from outer scope, not purely functional
  (bar 3))

(define (foo a)
  (define (bar a b)
    (+ a b)) ; getting a from function parameters, purely functional
  (bar a 3))

(define (bar a b) ; since this is purely functional, we can remove it from its
  (+ a b))        ; environment and it still works

(define (foo a)
  (bar a 3))
Personally, I'd go with the first approach, but either will work equally well.

Nesting functions is an excellent way to divide up the labor in many functions. It's not really "side-effecty"; if it helps, think of the captured variables as implicit parameters.
One example where nested functions are useful is to replace loops. The parameters to the nested function can act as induction variables which accumulate values. A simple example:
let factorial n =
  let rec facHelper p n =
    if n <= 1 then p else facHelper (p*n) (n-1)
  in
  facHelper 1 n
In this case, it wouldn't really make sense to declare a function like facHelper globally, since users shouldn't have to worry about the p parameter.
Be aware, however, that it can be difficult to test nested functions individually, since they cannot be referred to outside of their parent.

Consider the following (contrived) Haskell snippet:
putLines :: [String] -> IO ()
putLines lines = putStr string
  where string = concat lines
string is a locally bound named constant. But isn't it also a function taking no arguments that closes over lines, and therefore not referentially transparent? (In Haskell, constants and nullary functions are indeed indistinguishable!) Would you consider the above code "side-effecty" or non-functional because of this?

Related

The place of closures in functional programming

I have watched Robert C. Martin's talk "Functional Programming; What? Why? When?":
https://www.youtube.com/watch?v=7Zlp9rKHGD4
The main message of this talk is that state is unacceptable in functional programming.
Martin goes even further and claims that assignments are 'evil'.
So, keeping this talk in mind, my question is: where is the place for closures in functional programming?
When there is no state and no variables in functional code, what would be the main reason to create and use a closure (a closure that does not enclose any state or any variable)? Is the closure mechanism useful at all?
Without state or variables (perhaps only with immutable bindings), is there any need to refer to the enclosing lexical scope, given that there is nothing there that could change?
In that approach, isn't a Java-like lambda mechanism enough, one with no link to the enclosing lexical scope (which is why the captured variables have to be final)?
Yet some sources present closures as a must-have element of a functional language.
A lexical scope that can be closed over does not need to be mutable to be useful. Just consider curried functions as an example:
add = \a -> \b -> a+b
add1 = add(1)
add3 = add(3)
[add1(0), add1(2), add3(2), add3(5)] // [1, 3, 5, 8]
Here, the inner lambda closes over the value of a (or over the variable a, which makes no difference because of immutability).
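For what it's worth, here is the same example rendered in Clojure (a sketch; add, add1 and add3 simply mirror the pseudocode above):
(def add (fn [a] (fn [b] (+ a b)))) ; the inner fn closes over a

(def add1 (add 1))
(def add3 (add 3))

[(add1 0) (add1 2) (add3 2) (add3 5)] ; => [1 3 5 8]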
Closures are not ultimately necessary for functional programming, but then neither are local variables. Still, both are very good ideas. Closures allow a very simple notation for the most(?) important task of functional programming: dynamically creating new functions with specialised behaviour from abstracted code.
You use closures as you would in a language with mutable variables. The difference is obviously that they (usually) can't be modified.
The following is a simple example in Clojure (which, ironically, I'm writing in right now):
(let [a 10
      f (fn [b]
          (+ a b))]
  (println (f 4))) ; Prints "14"
The main benefit of closures in a case like this is that I can "partially apply" a function using a closure, then pass the partially applied function around, instead of needing to pass the non-applied function plus any data I'll need to call it (very useful in many scenarios). In the example above, what if I didn't want to call the function right away? Without the closure I would need to pass a along with f so it's available when f is called.
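A small sketch of that point (the names adder and apply-later are hypothetical, just for illustration):
(defn adder [a]
  (fn [b] (+ a b)))        ; returns a closure that carries a with it

(defn apply-later [f]
  (f 4))                   ; knows nothing about a, only receives the closure

(apply-later (adder 10))   ; => 14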
But you could also add some mutability into the mix if you deemed it necessary (although, as @Bergi points out, this example is "evil"):
(let [a (atom 10) ; Atoms are mutable
      f (fn [b]
          (swap! a inc) ; Increment a
          (+ @a b))]
  (println (f 4))  ; Prints "15"
  (println (f 4))) ; Prints "16"
In this way you can emulate static variables. You can use this to do cool things like define memoize. It uses a "static variable" to cache the input/output of referentially transparent functions. This increases memory use, but can save CPU time if used properly.
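A rough sketch of that idea (not Clojure's built-in memoize, just the general shape, with a hypothetical memo helper):
(defn memo [f]
  (let [cache (atom {})]                 ; the closed-over "static variable"
    (fn [& args]
      (if-let [hit (find @cache args)]
        (val hit)                        ; cache hit: reuse the stored result
        (let [result (apply f args)]
          (swap! cache assoc args result)
          result)))))

(def slow-square (memo (fn [x] (Thread/sleep 1000) (* x x))))
(slow-square 3) ; slow the first time, instant on repeated calls => 9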
I have to disagree with the idea that state is bad. State isn't evil; it's necessary. Every program has state. Global, mutable state is evil.
Also note that you can have mutability and still program functionally. Say I have a function containing a map over a list, and say I need to maintain an accumulator while mapping. I really have two options (ignoring "doing it manually"):
Switch the map to a fold.
Create a mutable variable, and mutate it while mapping.
Although option one should be preferred, both of these methods can be used while programming functionally. From the view "outside the function" there is no difference, even if one version internally uses a mutable variable. The function can still be referentially transparent and pure, since the only mutable state being affected is local to the function and can't possibly affect anything outside it.
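For option one, a sketch of the same accumulator expressed as a fold (reduce), so no mutation is needed at all:
(defn fold-fn [xs]
  (first
    (reduce (fn [[out a] x]
              (let [a (inc a)]               ; "increment" by rebinding, not mutating
                [(conj out (+ x a)) a]))
            [[] 0]
            xs)))

(fold-fn [10 20 30]) ; => [11 22 33], same result as the mutable version below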
Example code for option two, mutating a local variable:
(defn mut-fn [xs]
  (let [a (atom 0)]
    (map
      (fn [x]
        (swap! a inc) ; Increment a
        (+ x @a))     ; Return x plus the current value of a
      xs)))
Note the variable a cannot be seen from outside the function, so any effect it has can in no way cause global changes. The function will produce the same output for each input, so it's effectively pure.

Recursively run through a vector in Clojure

I'm just starting to play with Clojure.
How do I run through a vector of items?
My naive recursive function would have a form like the classic map, e.g.:
(defn map [f xs]
  (if (= xs [])
    []
    (cons (f (first xs)) (map f (rest xs)))))
The thing is, I can't find any examples of this kind of code on the web. I find a lot of examples using built-in sequence-traversing functions like for, map, and loop, but no one writing the raw recursive version.
Is that because you SHOULDN'T do this kind of thing in Clojure (e.g. because it uses lower-level Java primitives that don't have tail-call optimisation, or something like that)?
When you say "run through a vector", that's quite vague; since Clojure is a Lisp, and thus specializes in sequence analysis and manipulation, the beauty of using the language is that you don't think in terms of "run through a vector and then do something with each element"; instead you'd more idiomatically say "pull this out of a vector", or "transform this vector into X", or "I want this vector to give me X".
It is because of this perspective in Lisp languages that you will see so many examples, and so much production code, that doesn't just loop/recur through a vector, but rather specifically goes after what is wanted in a short, idiomatic way. Simple functions like reduce, map, filter, for, and into let you elegantly move over a sequence such as a vector while simultaneously doing what you want with its contents. In most other languages this would be at least two separate parts: the loop, and then the actual logic to do what you want.
You'll often find that if you think about sequences using the more imperative mindset you get from languages like C, C++, Java, etc., your code ends up about 4x longer (at least) than it would be if you first thought about your plan in a more functional way.
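For example (hypothetical data, just to show the flavour), the traversal and the logic collapse into a single expression:
(def xs [1 2 3 4 5])

(map inc xs)            ; => (2 3 4 5 6)   "transform this vector into X"
(filter even? xs)       ; => (2 4)          "pull these out of the vector"
(reduce + xs)           ; => 15             "I want this vector to give me X"
(into #{} (map inc xs)) ; => #{2 3 4 5 6}   build the target collection directly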
Clojure reuses stack frames only with tail recursion, and only when you use the explicit recur form; everything else consumes stack. The map example above is not tail recursive, because the cons happens after the recursive call, so it couldn't be TCO'd in any language. If you switch it to continuation-passing style, or otherwise carry the result along with the call, and use an explicit recur instead of recursing through map, you should be good to go.
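A minimal sketch of the recur version (using an explicit accumulator rather than full CPS, and a hypothetical name my-map so it doesn't shadow clojure.core/map):
(defn my-map [f xs]
  (loop [xs  xs
         acc []]
    (if (empty? xs)
      acc                                            ; done: return the accumulated vector
      (recur (rest xs) (conj acc (f (first xs)))))))

(my-map inc [1 2 3]) ; => [2 3 4], without growing the stack per element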

Fixed-Point Combinators

I am new to the world of fixed-point combinators, and I gather they are used to recurse over anonymous lambdas, but I haven't really had occasion to use them, or even been able to wrap my head around them completely.
I have seen the example in Javascript for a Y-combinator but haven't been able to successfully run it.
The question here is: can someone give an intuitive answer to the following?
What are fixed-point combinators (not just theoretically, but in the context of some example, to reveal what exactly the fixed point is in that context)?
What other kinds of fixed-point combinators are there, apart from the Y-combinator?
Bonus points if the example is not just in one language, but in Clojure as well.
UPDATE:
I have been able to find a simple example in Clojure, but still find it difficult to understand the Y-Combinator itself:
(defn Y [r]
  ((fn [f] (f f))
   (fn [f]
     (r (fn [x] ((f f) x))))))
Though the example is concise, I find it difficult to understand what is happening within the function. Any help provided would be useful.
Suppose you wanted to write the factorial function. Normally, you would write it as something like
function fact(n) = if n=0 then 1 else n * fact(n-1)
But that uses explicit recursion. If you wanted to use the Y-combinator instead, you could first abstract fact as something like
function factMaker(myFact) = lambda n. if n=0 then 1 else n * myFact(n-1)
This takes an argument (myFact) which it calls where the "true" fact would have called itself. I call this style of function "Y-ready", meaning it's ready to be fed to the Y-combinator.
The Y-combinator uses factMaker to build something equivalent to the "true" fact.
newFact = Y(factMaker)
Why bother? Two reasons. The first is theoretical: we don't really need recursion if we can "simulate" it using the Y-combinator.
The second is more pragmatic. Sometimes we want to wrap each function call with some extra code to do logging or profiling or memoization or a host of other things. If we try to do this to the "true" fact, the extra code will only be called for the original call to fact, not for any of the recursive calls. But if we want to do it for every call, including all the recursive calls, we can do something like
loggingFact = LoggingY(factMaker)
where LoggingY is a modified version of the Y combinator that introduces logging. Notice that we did not need to change factMaker at all!
All this is more motivation for why the Y-combinator matters than a detailed explanation of how that particular implementation of Y works (because there are many different ways to implement Y).
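To connect this back to the Clojure definition in the question, here is roughly how the pieces fit together (a sketch; fact-maker is just the Clojure spelling of factMaker above):
;; "Y-ready" factorial: it calls the function it is given instead of itself.
(defn fact-maker [my-fact]
  (fn [n]
    (if (zero? n) 1 (* n (my-fact (dec n))))))

(def new-fact (Y fact-maker)) ; Y as defined in the question
(new-fact 5)                  ; => 120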
To answer your second question, about fixed-point combinators other than Y: there are countably infinitely many standard fixed-point combinators, that is, combinators fix that satisfy the equation
fix f = f (fix f)
There are also countably many non-standard fixed-point combinators, which satisfy the equation
fix f = f (f (fix f))
and so on. Standard fixed-point combinators are recursively enumerable, but non-standard ones are not. Please see the following web page for examples, references, and discussion:
http://okmij.org/ftp/Computation/fixed-point-combinators.html#many-fixes

How does one implement a "stackless" interpreted language?

I am making my own Lisp-like interpreted language, and I want to do tail call optimization. I want to free my interpreter from the C stack so I can manage my own jumps from function to function and my own stack magic to achieve TCO. (I really don't mean stackless per se, just the fact that calls don't add frames to the C stack. I would like to use a stack of my own that does not grow with tail calls). Like Stackless Python, and unlike Ruby or... standard Python I guess.
But, as my language is a Lisp derivative, all evaluation of s-expressions is currently done recursively (because it's the most obvious way I thought of to do this nonlinear, highly hierarchical process). I have an eval function, which calls a Lambda::apply function every time it encounters a function call. The apply function then calls eval to execute the body of the function, and so on. Mutual stack-hungry non-tail C recursion. The only iterative part I currently use is to eval a body of sequential s-expressions.
(defun f (x y)
  (a x y))        ; tail call! goto instead of call.
                  ; (do not grow the stack, keep return addr)

(defun a (x y)
  (+ x y))

; ...

(print (f 1 2))   ; how does the return work here? how does it know it's supposed to
                  ; return the value here to be used by print, and how does it know
                  ; how to continue execution here??
So, how do I avoid using C recursion? Or can I use some kind of goto that jumps across c functions? longjmp, perhaps? I really don't know. Please bear with me, I am mostly self- (Internet- ) taught in programming.
One solution is what is sometimes called "trampolined style". The trampoline is a top-level loop that dispatches to small functions that do some small step of computation before returning.
I've sat here for nearly half an hour trying to contrive a good, short example. Unfortunately, I have to do the unhelpful thing and send you to a link:
http://en.wikisource.org/wiki/Scheme:_An_Interpreter_for_Extended_Lambda_Calculus/Section_5
The paper is called "Scheme: An Interpreter for Extended Lambda Calculus", and section 5 implements a working Scheme interpreter in an outdated dialect of Lisp. The secret is in how they use the **CLINK** instead of a stack. The other globals are used to pass data around between the implementation functions, like the registers of a CPU. I would ignore **QUEUE**, **TICK**, and **PROCESS**, since those deal with threading and fake interrupts. **EVLIS** and **UNEVLIS** are, specifically, used to evaluate function arguments. Unevaluated args are stored in **UNEVLIS** until they are evaluated and moved into **EVLIS**.
Functions to pay attention to, with some small notes:
MLOOP: MLOOP is the main loop of the interpreter, or "trampoline". Ignoring **TICK**, its only job is to call whatever function is in **PC**. Over and over and over.
SAVEUP: SAVEUP conses all the registers onto the **CLINK**, which is basically the same as when C saves the registers to the stack before a function call. The **CLINK** is actually a "continuation" for the interpreter. (A continuation is just the state of a computation. A saved stack frame is technically a continuation, too. Hence, some Lisps save the stack to the heap to implement call/cc.)
RESTORE: RESTORE restores the "registers" as they were saved in the **CLINK**. It's similar to restoring a stack frame in a stack-based language. So, it's basically "return", except some function has explicitly stuck the return value into **VALUE**. (**VALUE** is obviously not clobbered by RESTORE.) Also note that RESTORE doesn't always have to return to a calling function. Some functions will actually SAVEUP a whole new computation, which RESTORE will happily "restore".
AEVAL: AEVAL is the EVAL function.
EVLIS: EVLIS exists to evaluate a function's arguments, and apply a function to those args. To avoid recursion, it SAVEUPs EVLIS-1. EVLIS-1 would just be regular old code after the function application if the code was written recursively. However, to avoid recursion, and the stack, it is a separate "continuation".
I hope I've been of some help. I just wish my answer (and link) was shorter.
What you're looking for is called continuation-passing style. This style adds an additional item to each function call (you could think of it as a parameter, if you like), that designates the next bit of code to run (the continuation k can be thought of as a function that takes a single parameter). For example you can rewrite your example in CPS like this:
(defun f (x y k)
  (a x y k))

(defun a (x y k)
  (+ x y k))

(f 1 2 print)
The implementation of + will compute the sum of x and y, then pass the result to k sort of like (k sum).
Your main interpreter loop then doesn't need to be recursive at all. It will, in a loop, apply each function application one after another, passing the continuation around.
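As a rough sketch (assuming Clojure) of what such a loop can look like, with each "call" returning a thunk for the loop to run instead of calling deeper into the host stack (the names step-loop and add-k mirror the pseudocode above and are only illustrative):
(defn step-loop [step]
  (loop [s step]
    (if (fn? s)
      (recur (s))   ; run the next step; a "call" is just a returned thunk
      s)))          ; not a thunk: we have the final value

(defn add-k [x y k] (fn [] (k (+ x y)))) ; CPS-style +: hands its sum to k
(defn f [x y k]     (fn [] (add-k x y k)))

(step-loop (f 1 2 identity)) ; => 3, with no nested host-language calls
Clojure's built-in trampoline does essentially what step-loop does here.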
It takes a little bit of work to wrap your head around this. I recommend some reading materials such as the excellent SICP.
Tail recursion can be thought of as reusing, for the callee, the same stack frame that you are currently using for the caller. So you could just re-set the arguments and goto the beginning of the function.
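That is exactly what Clojure's loop/recur spells out explicitly, which can serve as a mental model (a sketch reusing the factorial shape from earlier):
(defn factorial [n]
  (loop [p 1, n n]
    (if (<= n 1)
      p
      (recur (* p n) (dec n))))) ; "re-set the arguments and goto the top"

(factorial 5) ; => 120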

What are best practices for including parameters such as an accumulator in functions?

I've been writing more Lisp code recently, in particular recursive functions that take some data and build a resulting data structure. Sometimes it seems I need to pass two or three pieces of information to the next invocation of the function, in addition to the user-supplied data. Let's call these accumulators.
What is the best way to organize these interfaces to my code?
Currently, I do something like this:
(defun foo (user1 user2 &optional acc1 acc2 acc3)
  ;; do something
  (foo user1 user2 (cons x acc1) (cons y acc2) (cons z acc3)))
This works as I'd like it to, but I'm concerned because I don't really need to present the &optional parameters to the programmer.
Three approaches I'm considering:
Have a wrapper function that users are encouraged to use, which immediately invokes the extended definition.
Use labels internally within a function whose signature is concise.
Just start using a loop and variables. However, I'd prefer not to, since I'd like to really wrap my head around recursion.
Thanks guys!
If you want to write idiomatic Common Lisp, I'd recommend the loop and variables for iteration. Recursion is cool, but it's only one tool of many for the Common Lisper. Besides, tail-call elimination is not guaranteed by the Common Lisp spec.
That said, I'd recommend the labels approach if you have a structure, a tree for example, that is unavoidably recursive and you can't get tail calls anyway. Optional arguments let your implementation details leak out to the caller.
Your impulse to shield implementation details from the user is a smart one, I think. I don't know Common Lisp, but in Scheme you do it by defining your helper function in the public function's lexical scope.
(define (fibonacci n)
  (let fib-accum ((a 0)
                  (b 1)
                  (n n))
    (if (< n 1)
        a
        (fib-accum b (+ a b) (- n 1)))))
The let expression defines a function and binds it to a name that's only visible within the let, then invokes the function.
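The labels approach reads much the same way; in Clojure, for instance, letfn plays that role (a sketch of the same fibonacci, with the accumulators hidden from callers):
(defn fibonacci [n]
  (letfn [(fib-accum [a b n]          ; helper and its accumulators are invisible outside
            (if (< n 1)
              a
              (recur b (+ a b) (dec n))))]
    (fib-accum 0 1 n)))

(fibonacci 10) ; => 55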
I have used all the options you mention. All have their merits, so it boils down to personal preference.
I have arrived at using whatever I deem appropriate. If I think that leaving the &optional accumulators in the API might make sense for the user, I leave it in. For example, in a reduce-like function, the accumulator can be used by the user for providing a starting value. Otherwise, I'll often rewrite it as a loop, do, or iter (from the iterate library) form, if it makes sense to perceive it as such. Sometimes, the labels helper is also used.
