Scheme let statement - functional-programming

In Scheme, which is a functional programming language, there is no assignment statement.
But in a let statement
(let ((x 2))
  (+ x 3))
You are assigning 2 to x, so why doesn't this violate the principle that there are no assignment statements in functional programming?

The statement "Scheme which is a functional programming language" is incorrect. In Scheme, a functional-programming style is encouraged but not enforced. In fact, you can use set! (an assignment statement!) to modify the value of any variable:
(define x 10)
(set! x (+ x 3))
x
=> 13
Regarding the let expression in the question, remember that an expression such as this one:
(let ((x 10))
  (+ x 3))
=> 13
... it's just syntactic sugar, and under the hood it's implemented like this:
((lambda (x)
   (+ x 3))
 10)
=> 13
Notice that a let performs a one-time, single assignment on each of its variables, so it doesn't violate any purely functional programming principle per se. The following can be affirmed of a let expression:
An evaluation of an expression does not have a side effect if it does not change an observable state of the machine, and produces the same values for the same input
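For instance (a tiny sketch of my own), rebinding a name in a nested let creates a fresh binding rather than mutating the old one:
(let ((x 2))
  (let ((x (+ x 1)))  ; a new binding that shadows the outer x
    x))               ; => 3, while the outer x still names 2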
Also, quoting from Wikipedia:
Impure functional languages provide both single assignment as well as true assignment (though true assignment is typically used with less frequency than in imperative programming languages). For example, in Scheme, both single assignment (with let) and true assignment (with set!) can be used on all variables, and specialized primitives are provided for destructive update inside lists, vectors, strings, etc.

http://en.wikipedia.org/wiki/Assignment_(computer_science)#Single_assignment
Basically, it's a single assignment that's allowable. Other assignment is not "allowed" because of side effects.
Edit: "allowed" is in quotation marks because, as Oscar stated, it is not mandatory but suggested.

Related

How to fix the infinite loop error of "f = lambda x: f(x)+1" in a functional programming language?

Consider the following code in Python:
f = lambda x: x
f = lambda x: f(x)+1
f(1)
Python throws an "infinite loop" error (in fact a RecursionError) while running the last line, which makes sense given that it interprets the second line as a recursive definition of f.
But the second line seems reasonable if one substitutes the 'value' of f on the right side, and then assigns the resulting function to f (on the left).
Is there a straightforward way to fix this error in Python (or another language which can work with functions) via lambda-calculus operations?
I asked this question out of curiosity, to learn more about functional languages, but it seems to me that the answer would help in producing looping calculations over functions!
Sure. In the Lisp/Scheme family, you can use let* for this purpose:
(let* ((f (lambda (x) x))
       (f (lambda (x) (+ (f x) 1))))
  (display (f 1)))
Note that you'll find Scheme syntax to be much closer to lambda calculus, aside from the prefix notation. The let* construct sequentially defines names, allowing the first name to be used in the body of the second, even if you "shadow" it.
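That sequencing is just nested lambda applications; the let* above unfolds to something like this sketch:
((lambda (f)                       ; binds f to the identity
   ((lambda (f)                    ; rebinds f, capturing the outer f
      (display (f 1)))             ; prints 2
    (lambda (x) (+ (f x) 1))))
 (lambda (x) x))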
In Python, you'll have to name the functions separately, something like this:
f0 = lambda x: x
f1 = lambda x: f0(x) + 1
print(f1(1))
If you want to study lambda-calculus, especially the untyped kind, Scheme is your best choice as most lambda-calculus constructs will map directly to it, modulo the prefix syntax. For typed lambda-calculus, a good choice would be a language like Haskell. I personally wouldn't use Python to study functional programming, as it conflates the two styles in ways that will prove to be a hindrance; though of course it's doable.

What is the name of this q/kdb+ feature? Does any flavor of LISP implement it? How?

The q programming language has a feature (which this tutorial calls "function projection") where a function of two or more parameters can be called with fewer parameters than it requires, but the result is an intermediate object, and the function will not be executed until all remaining parameters are passed; one way to see it is that functions behave like multi-dimensional arrays, so that (f[x])[y] is equivalent to f[x;y]. For example ...
q)add:{x+y}
q)add[42;]
{x+y}[42;]
q)add[42;][3]
45
q)g:add[42;]
q)g[3]
45
Since q does not have lexical scoping, this feature becomes very useful for obtaining lexical-scoping behavior by passing the necessary variables to an inner function as a partial list of parameters; e.g. a print-parameter decorator can be constructed using this feature:
q)printParameterDecorator:{[f] {[f;x] -1 "Input: ",string x; f x}f};
q)f: printParameterDecorator (2+);
q)f 3
Input: 3
5
My questions:
Is the term "function projection" a standard term? Or does this feature carry a different name in the functional programming literature?
Does any variety of LISP implement this feature? Which ones?
Could you provide some example LISP code please?
Is the term "function projection" a standard term? Or does this feature carry a different name in the functional programming literature?
No, you usually call it partial application.
Does any variety of LISP implement this feature? Which ones?
Practically all Lisps allow you to partially apply a function, but usually you need to write a closure explicitly. For example, in Common Lisp:
(defun add (x y)
  (+ x y))
The utility function curry from alexandria can be used to create a closure:
USER> (alexandria:curry #'add 42)
#<CLOSURE (LAMBDA (&REST ALEXANDRIA.1.0.0::MORE) :IN CURRY) {1019FE178B}>
USER> (funcall * 3) ;; asterisk (*) is the previous value, the closure
45
The resulting closure is equivalent to the following one:
(lambda (y) (add 42 y))
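Scheme has no standard curry, but a small helper (my own sketch, mirroring the add above; SRFI-26's cut serves a similar purpose) achieves the same effect:
; partial: returns a closure waiting for the remaining arguments
(define (partial f . args)
  (lambda more
    (apply f (append args more))))

(define (add x y) (+ x y))
(define g (partial add 42))
(g 3)
=> 45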
Some functional languages like OCaml only allow functions to have a single parameter, but syntactically you can define functions of multiple parameters:
(fun x y -> x + y)
The above is equivalent to:
(function x -> (function y -> x + y))
See also What is the difference between currying and partial application?
N.B. in fact the q documentation refers to it as partial application:
Notationally, projection is a partial application in which some arguments are supplied and the others are omitted
I think another way of doing this:
q)f:2+
q)g:{"result: ",string x}
q)'[g;f]3
"result: 5"
It is function composition: 3 is passed to f, then the result from f is passed to g.
I'm not sure if it counts as LISP, but it achieves the same result.
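For the record, the same composition is easy to express in Scheme (a sketch of my own; Racket even provides a built-in compose). The f and g below mirror the q definitions above:
; compose2: feed x to f, then pass f's result to g
(define (compose2 g f)
  (lambda (x) (g (f x))))

(define f (lambda (x) (+ 2 x)))
(define g (lambda (x) (string-append "result: " (number->string x))))

((compose2 g f) 3)
=> "result: 5"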

How does term-rewriting based evaluation work?

The Pure programming language is apparently based on term rewriting, instead of the lambda-calculus that traditionally underlies similar-looking languages.
...what qualitative, practical difference does this make? In fact, what is the difference in the way that it evaluates expressions?
The linked page provides a lot of examples of term rewriting being useful, but it doesn't actually describe what it does differently from function application, except that it has rather flexible pattern matching (and pattern matching as it appears in Haskell and ML is nice, but not fundamental to the evaluation strategy). Values are matched against the left side of a definition and substituted into the right side - isn't this just beta reduction?
The matching of patterns, and substitution into output expressions, superficially looks a bit like syntax-rules to me (or even the humble #define), but the main feature of that is obviously that it happens before rather than during evaluation, whereas Pure is fully dynamic and there is no obvious phase separation in its evaluation system (and in fact otherwise Lisp macro systems have always made a big noise about how they are not different from function application). Being able to manipulate symbolic expression values is cool'n'all, but also seems like an artifact of the dynamic type system rather than something core to the evaluation strategy (pretty sure you could overload operators in Scheme to work on symbolic values; in fact you can even do it in C++ with expression templates).
So what is the mechanical/operational difference between term rewriting (as used by Pure) and traditional function application, as the underlying model of evaluation, when substitution happens in both?
Term rewriting doesn't have to look anything like function application, but languages like Pure emphasise this style because a) beta-reduction is simple to define as a rewrite rule and b) functional programming is a well-understood paradigm.
A counter-example would be a blackboard or tuple-space paradigm, which term-rewriting is also well-suited for.
One practical difference between beta-reduction and full term-rewriting is that rewrite rules can operate on the definition of an expression, rather than just its value. This includes pattern-matching on reducible expressions:
-- Functional style
map f nil = nil
map f (cons x xs) = cons (f x) (map f xs)
-- Compose f and g before mapping, to prevent traversing xs twice
result = map (compose f g) xs
-- Term-rewriting style: spot double-maps before they're reduced
map f (map g xs) = map (compose f g) xs
map f nil = nil
map f (cons x xs) = cons (f x) (map f xs)
-- All double maps are now automatically fused
result = map f (map g xs)
Notice that we can do this with LISP macros (or C++ templates), since they are a term-rewriting system, but this style blurs LISP's crisp distinction between macros and functions.
CPP's #define isn't equivalent, since it's not safe or hygienic (syntactically-valid programs can become invalid after pre-processing).
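As a rough illustration in Scheme (map* is a hypothetical macro of my own; note the rewriting happens at macro-expansion time, not during evaluation, which is exactly the phase distinction discussed above):
; map*: fuses (map* f (map* g xs)) into a single traversal
(define-syntax map*
  (syntax-rules (map*)
    ; rewrite rule: double maps compose before reduction
    ((_ f (map* g xs)) (map* (lambda (x) (f (g x))) xs))
    ; base case: fall through to the ordinary map
    ((_ f xs) (map f xs))))

(map* car (map* cdr '((1 2) (3 4))))
=> (2 4)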
We can also define ad-hoc clauses to existing functions as we need them, eg.
plus (times x y) (times x z) = times x (plus y z)
Another practical consideration is that rewrite rules must be confluent if we want deterministic results, ie. we get the same result regardless of which order we apply the rules in. No algorithm can check this for us (it's undecidable in general) and the search space is far too large for individual tests to tell us much. Instead we must convince ourselves that our system is confluent by some formal or informal proof; one way would be to follow systems which are already known to be confluent.
For example, beta-reduction is known to be confluent (via the Church-Rosser Theorem), so if we write all of our rules in the style of beta-reductions then we can be confident that our rules are confluent. Of course, that's exactly what functional programming languages do!

Functional programming and the closure term birth

I'm studying functional programming and lambda calculus, but I'm wondering whether the term "closure" is also present in Church's original work, or whether it's a more modern term strictly tied to programming languages.
I remember that in Church's work there were terms like free variable, closed into..., and so on.
It is a more modern term, due (as many things in modern FP are) to P. J. Landin (1964), The mechanical evaluation of expressions:
Also we represent the value of a λ-expression by a
bundle of information called a "closure," comprising
the λ-expression and the environment relative to which
it was evaluated.
Consider the following function definition in Scheme:
(define (adder a)
  (lambda (x) (+ a x)))
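Calling adder yields a procedure that carries a with it, for example:
(define add5 (adder 5)) ; the returned lambda closes over a = 5
(add5 3)
=> 8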
The notion of an explicit closure is not required in the pure lambda calculus, because variable substitution takes care of it. The above code snippet can be translated as
λa. λx. (a + x)
When you apply this to a value z, it becomes
λx. (z + x)
by β-reduction, which involves substitution. You can call this "closure over a" if you want.
(The example uses a function argument, but this holds true for any variable binding, since in the pure lambda calculus all variable bindings must occur via λ terms.)

Why is foldl defined in a strange way in Racket?

In Haskell, like in many other functional languages, the function foldl is defined such that, for example, foldl (-) 0 [1,2,3,4] = -10.
This is OK, because foldl (-) 0 [1,2,3,4] is, by definition, ((((0 - 1) - 2) - 3) - 4).
But, in Racket, (foldl - 0 '(1 2 3 4)) is 2, because Racket "intelligently" calculates like this: (4 - (3 - (2 - (1 - 0)))), which indeed is 2.
Of course, if we define an auxiliary function flip, like this:
(define (flip bin-fn)
  (lambda (x y)
    (bin-fn y x)))
then we could in Racket achieve the same behavior as in Haskell: instead of (foldl - 0 '(1 2 3 4)) we can write: (foldl (flip -) 0 '(1 2 3 4))
The question is: Why is foldl in Racket defined in such an odd (nonstandard and nonintuitive) way, differently than in any other language?
The Haskell definition is not uniform. In Racket, the function passed to both folds has the same order of inputs, and therefore you can just replace foldl by foldr and get the same result. If you do that with the Haskell version you'd get a different result (usually) — and you can see this in the different types of the two.
(In fact, I think that in order to do a proper comparison you should avoid these toy numeric examples where both of the type variables are integers.)
This has the nice byproduct that you're encouraged to choose either foldl or foldr according to their semantic differences. My guess is that with Haskell's order you're likely to choose according to the operation. You have a good example of this: you've used foldl because you want to subtract each number — and that's such an "obvious" choice that it's easy to overlook the fact that foldl is usually a bad choice in a lazy language.
Another difference is that the Haskell version is more limited than the Racket version in the usual way: it operates on exactly one input list, whereas Racket can accept any number of lists. This makes it more important to have a uniform argument order for the input function.
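For example (a quick sketch of my own), Racket's foldl walks all the lists in lockstep, passing one element from each list plus the accumulator:
; proc receives one element per list, with the accumulator last
(foldl (lambda (a b acc) (+ a b acc)) 0 '(1 2 3) '(10 20 30))
=> 66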
Finally, it is wrong to assume that Racket diverged from "many other functional languages", since folding is far from a new trick, and Racket has roots that are far older than Haskell (or these other languages). The question could therefore go the other way: why is Haskell's foldl defined in a strange way? (And no, (-) is not a good excuse.)
Historical update:
Since this seems to bother people again and again, I did a little bit of legwork. This is not definitive in any way, just my second-hand guessing. Feel free to edit this if you know more, or even better, email the relevant people and ask. Specifically, I don't know the dates where these decisions were made, so the following list is in rough order.
First there was Lisp, and no mention of "fold"ing of any kind. Instead, Lisp has reduce which is very non-uniform, especially if you consider its type. For example, :from-end is a keyword argument that determines whether it's a left or a right scan and it uses different accumulator functions which means that the accumulator type depends on that keyword. This is in addition to other hacks: usually the first value is taken from the list (unless you specify an :initial-value). Finally, if you don't specify an :initial-value, and the list is empty, it will actually apply the function on zero arguments to get a result.
All of this means that reduce is usually used for what its name suggests: reducing a list of values into a single value, where the two types are usually the same. The conclusion here is that it's serving a kind of a similar purpose to folding, but it's not nearly as useful as the generic list iteration construct that you get with folding. I'm guessing that this means that there's no strong relation between reduce and the later fold operations.
The first relevant language that follows Lisp and has a proper fold is ML. The choice that was made there, as noted in newacct's answer below, was to go with the uniform types version (ie, what Racket uses).
The next reference is Bird & Wadler's ItFP (1988), which uses different types (as in Haskell). However, they note in the appendix that Miranda has the same type (as in Racket).
Miranda later on switched the argument order (ie, moved from the Racket order to the Haskell one). Specifically, that text says:
WARNING - this definition of foldl differs from that in older versions of Miranda. The one here is the same as that in Bird and Wadler (1988). The old definition had the two args of `op' reversed.
Haskell took a lot of stuff from Miranda, including the different types. (But of course I don't know the dates so maybe the Miranda change was due to Haskell.) In any case, it's clear at this point that there was no consensus, hence the reversed question above holds.
OCaml went with the Haskell direction and uses different types
I'm guessing that "How to Design Programs" (aka HtDP) was written at roughly the same period, and they chose the same type. There is, however, no motivation or explanation — and in fact, after that exercise it's simply mentioned as one of the built-in functions.
Racket's implementation of the fold operations was, of course, the "built-ins" that are mentioned here.
Then came SRFI-1, and the choice was to use the same-type version (as Racket). This decision was questioned by John David Stone, who points at a comment in the SRFI that says:
Note: MIT Scheme and Haskell flip F's arg order for their reduce and fold functions.
Olin later addressed this; all he said was:
Good point, but I want consistency between the two functions.
state-value first: srfi-1, SML
state-value last: Haskell
Note in particular his use of state-value, which suggests a view where consistent types are a possibly more important point than operator order.
"differently than in any other language"
As a counter-example, Standard ML's foldl also works this way (ML is a very old and influential functional language): http://www.standardml.org/Basis/list.html#SIG:LIST.foldl:VAL
Racket's foldl and foldr (and also SRFI-1's fold and fold-right) have the property that
(foldr cons null lst) = lst
(foldl cons null lst) = (reverse lst)
I speculate the argument order was chosen for that reason.
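These identities are easy to check at a Racket REPL:
> (foldr cons null '(1 2 3))
'(1 2 3)
> (foldl cons null '(1 2 3))
'(3 2 1)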
From the Racket documentation, the description of foldl:
(foldl proc init lst ...+) → any/c
Two points of interest for your question are mentioned:
the input lsts are traversed from left to right
And
foldl processes the lsts in constant space
I'm going to speculate on what the implementation might look like, using a single list for simplicity's sake:
(define (my-foldl proc init lst)
  (define (iter lst acc)
    (if (null? lst)
        acc
        (iter (cdr lst) (proc (car lst) acc))))
  (iter lst init))
As you can see, the requirements of left-to-right traversal and constant space are met (notice the tail recursion in iter), but the order of the arguments for proc was never specified in the description. Hence, the result of calling the above code would be:
(my-foldl - 0 '(1 2 3 4))
> 2
If we had specified the order of the arguments for proc in this way:
(proc acc (car lst))
Then the result would be:
(my-foldl - 0 '(1 2 3 4))
> -10
My point is, the documentation for foldl doesn't specify the order of the arguments passed to proc; it only guarantees that constant space is used and that the elements in the list are traversed from left to right.
As a side note, you can get the desired evaluation order for your expression by simply writing this:
(- 0 1 2 3 4)
> -10
