Examples of recursive predicates

In his Algorithms for Functional Programming, Stone gives a design pattern for recursively defined predicates, which in Scheme is
(define (check stop? continue? step)
  (rec (checker . arguments)
    (or (apply stop? arguments)
        (and (apply continue? arguments)
             (apply (pipe step checker) arguments)))))
where pipe is the author's function for composing two functions in diagrammatic order: ((pipe f g) x) = (g (f x)).
So, for instance, to test whether a number is a power of two, you could define
(define power-of-two? (check (sect = <> 1) even? halve))
where (sect = <> 1) is the author's notation for partial application (a "section"), equivalent to (lambda (x) (= x 1)).
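For reference, here is a self-contained sketch of the pattern in plain Scheme, with pipe and halve written out explicitly (their definitions here are my paraphrase of the book's, and rec is replaced by an internal define):
(define (pipe f g)                  ; compose in diagrammatic order:
  (lambda args (g (apply f args)))) ; ((pipe f g) x) = (g (f x))

(define (halve n) (quotient n 2))

(define (check stop? continue? step)
  (define (checker . arguments)
    (or (apply stop? arguments)
        (and (apply continue? arguments)
             (apply (pipe step checker) arguments))))
  checker)

(define power-of-two?
  (check (lambda (n) (= n 1)) even? halve))

(display (power-of-two? 64)) (newline) ; #t
(display (power-of-two? 96)) (newline) ; #f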
Clearly many predicates could be implemented recursively even when doing so would not be useful. And clearly there are some recursive predicates that wouldn't use this pattern, like predicates on trees.
What are some classic predicates that fit this pattern? I guess testing whether something is in the Cantor set, but that's almost the same as the above.

It is not clear what you are asking, but your example is a classic example of programming with combinators.
Combinators are functions that take functions as input and return functions.
Combinators are fundamental in functional programming; using them you can implement everything. For instance, if you define a data structure for an object as a function, you can compose objects using some combinator and get a new object.
The combinator from your example seems useful for checking a predicate over the repeated composition of a step function.
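As one familiar predicate that fits the pattern, here is a sketch: testing whether a list is sorted has exactly the stop?/continue?/step shape (the helper definitions repeat the sketch above, and sorted? is an illustrative name, not the book's):
(define (pipe f g)
  (lambda args (g (apply f args))))

(define (check stop? continue? step)
  (define (checker . arguments)
    (or (apply stop? arguments)
        (and (apply continue? arguments)
             (apply (pipe step checker) arguments))))
  checker)

(define sorted?
  (check (lambda (lst) (or (null? lst) (null? (cdr lst)))) ; stop: 0 or 1 elements left
         (lambda (lst) (<= (car lst) (cadr lst)))          ; continue: first pair in order
         cdr))                                             ; step: drop the head

(display (sorted? '(1 2 2 5))) (newline) ; #t
(display (sorted? '(3 1 2))) (newline)   ; #f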

Related

How to fix the infinite loop error of "f = lambda x: f(x)+1" in a functional programming language?

Consider the following code in python:
f = lambda x: x
f = lambda x: f(x)+1
f(1)
Python raises a RecursionError ("maximum recursion depth exceeded") while running the last line, which makes sense given that it interprets the second line as a recursive definition of f.
But the second line seems reasonable if one substitutes the 'value' of f on the right side, and then assigns the resulting function to f (on the left).
Does there exist a straightforward way to fix this error in Python (or another language that can work with functions) via lambda-calculus operations?
I asked this question out of curiosity, to learn more about functional languages, but it seems to me that the answer would also help in building iterated computations on functions!
Sure. In the Lisp/Scheme family, you can use let* for this purpose:
(let* ((f (lambda (x) x))
       (f (lambda (x) (+ (f x) 1))))
  (display (f 1)))
Note that you'll find Scheme syntax to be much closer to lambda-calculus, aside from the prefix notation. The let* construct sequentially defines names, allowing the first name to be used in the body of the second, even if you "shadow" it.
In Python, you'll have to name the functions separately, something like this:
f0 = lambda x: x
f1 = lambda x: f0(x) + 1
print(f1(1))
If you want to study lambda-calculus, especially the untyped kind, Scheme is your best choice as most lambda-calculus constructs will map directly to it, modulo the prefix syntax. For typed lambda-calculus, a good choice would be a language like Haskell. I personally wouldn't use Python to study functional programming, as it conflates the two styles in ways that will prove to be a hindrance; though of course it's doable.
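If you want the substitution itself rather than two separate names, note that "substitute the value of f into the right side" is literally one application (a beta-reduction): apply a function that takes the old f and returns the new one. A sketch in Scheme, where old-f is a name introduced here for clarity:
(define f
  ((lambda (old-f)                  ; old-f is bound to the identity function
     (lambda (x) (+ (old-f x) 1)))  ; the new f closes over old-f, not over itself
   (lambda (x) x)))

(display (f 1)) ; 2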

What is the name of this q/kdb+ feature? Does any flavor of LISP implement it? How?

The q programming language has a feature (which this tutorial calls "function projection") where a function of two or more parameters can be called with fewer parameters than it requires, but the result is an intermediate object, and the function will not be executed until all remaining parameters are passed; one way to see it is that functions behave like multi-dimensional arrays, so that (f[x])[y] is equivalent to f[x;y]. For example ...
q)add:{x+y}
q)add[42;]
{x+y}[42;]
q)add[42;][3]
45
q)g:add[42;]
q)g[3]
45
Since q does not have lexical scoping, this feature becomes very useful for obtaining lexical-scoping behavior by passing the necessary variables to an inner function as a partial list of parameters; e.g. a print-parameter decorator can be constructed using this feature:
q)printParameterDecorator:{[f] {[f;x] -1 "Input: ",string x; f x}f};
q)f: printParameterDecorator (2+);
q)f 3
Input: 3
5
My questions:
Is the term "function projection" a standard term? Or does this feature carry a different name in the functional programming literature?
Does any variety of LISP implement this feature? Which ones?
Could you provide some example LISP code please?
Is the term "function projection" a standard term? Or does this feature carry a different name in the functional programming literature?
No, you usually call it partial application.
Does any variety of LISP implement this feature? Which ones?
Practically all Lisps allow you to partially apply a function, but usually you need to write a closure explicitly. For example, in Common Lisp:
(defun add (x y)
  (+ x y))
The utility function curry from alexandria can be used to create a closure:
USER> (alexandria:curry #'add 42)
#<CLOSURE (LAMBDA (&REST ALEXANDRIA.1.0.0::MORE) :IN CURRY) {1019FE178B}>
USER> (funcall * 3) ;; asterisk (*) is the previous value, the closure
45
The resulting closure is equivalent to the following one:
(lambda (y) (add 42 y))
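In Scheme, the same closure can be written directly, or produced by a small helper; a minimal sketch (curry2 is an illustrative name here, not a standard procedure):
(define (curry2 f x)      ; fix the first argument of a two-argument function
  (lambda (y) (f x y)))

(define (add x y) (+ x y))

(define g (curry2 add 42))
(display (g 3)) ; 45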
Some functional languages like OCaml only allow functions to have a single parameter, but syntactically you can define functions of multiple parameters:
(fun x y -> x + y)
The above is equivalent to:
(function x -> (function y -> x + y))
See also What is the difference between currying and partial application?
N.b. in fact the q documentation refers to it as partial application:
Notationally, projection is a partial application in which some arguments are supplied and the others are omitted
I think there is another way of doing this:
q)f:2+
q)g:{"result: ",string x}
q)'[g;f]3
"result: 5"
It is function composition: 3 is passed to f, then the result from f is passed to g.
I'm not sure whether this counts as LISP, but it achieves the same result.
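For comparison, a sketch of the same composition in Scheme (compose2 is an illustrative name; it applies f first and then g, mirroring q's '[g;f]):
(define (compose2 g f)    ; apply f first, then g
  (lambda (x) (g (f x))))

(define f (lambda (x) (+ 2 x)))
(define g (lambda (x) (string-append "result: " (number->string x))))

(display ((compose2 g f) 3)) ; result: 5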

How does term-rewriting based evaluation work?

The Pure programming language is apparently based on term rewriting, instead of the lambda-calculus that traditionally underlies similar-looking languages.
...what qualitative, practical difference does this make? In fact, what is the difference in the way that it evaluates expressions?
The linked page provides a lot of examples of term rewriting being useful, but it doesn't actually describe what it does differently from function application, except that it has rather flexible pattern matching (and pattern matching as it appears in Haskell and ML is nice, but not fundamental to the evaluation strategy). Values are matched against the left side of a definition and substituted into the right side - isn't this just beta reduction?
The matching of patterns, and substitution into output expressions, superficially looks a bit like syntax-rules to me (or even the humble #define), but the main feature of that is obviously that it happens before rather than during evaluation, whereas Pure is fully dynamic and there is no obvious phase separation in its evaluation system (and in fact otherwise Lisp macro systems have always made a big noise about how they are not different from function application). Being able to manipulate symbolic expression values is cool'n'all, but also seems like an artifact of the dynamic type system rather than something core to the evaluation strategy (pretty sure you could overload operators in Scheme to work on symbolic values; in fact you can even do it in C++ with expression templates).
So what is the mechanical/operational difference between term rewriting (as used by Pure) and traditional function application, as the underlying model of evaluation, when substitution happens in both?
Term rewriting doesn't have to look anything like function application, but languages like Pure emphasise this style because a) beta-reduction is simple to define as a rewrite rule and b) functional programming is a well-understood paradigm.
A counter-example would be a blackboard or tuple-space paradigm, which term-rewriting is also well-suited for.
One practical difference between beta-reduction and full term-rewriting is that rewrite rules can operate on the definition of an expression, rather than just its value. This includes pattern-matching on reducible expressions:
-- Functional style
map f nil = nil
map f (cons x xs) = cons (f x) (map f xs)
-- Compose f and g before mapping, to prevent traversing xs twice
result = map (compose f g) xs
-- Term-rewriting style: spot double-maps before they're reduced
map f (map g xs) = map (compose f g) xs
map f nil = nil
map f (cons x xs) = cons (f x) (map f xs)
-- All double maps are now automatically fused
result = map f (map g xs)
Notice that we can do this with LISP macros (or C++ templates), since they are a term-rewriting system, but this style blurs LISP's crisp distinction between macros and functions.
CPP's #define isn't equivalent, since it's not safe or hygienic (syntactically-valid programs can become invalid after pre-processing).
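Since a Lisp can treat code as data, the fusion rule itself can be written as an ordinary function over quoted s-expressions; a minimal sketch assuming terms are proper lists (fuse-maps is an illustrative name):
(define (fuse-maps expr)
  (if (not (pair? expr))
      expr                                   ; atoms are left alone
      (let ((expr (map fuse-maps expr)))     ; rewrite subterms first
        (if (and (eq? (car expr) 'map)       ; match (map f (map g xs))
                 (= (length expr) 3)
                 (pair? (caddr expr))
                 (eq? (car (caddr expr)) 'map)
                 (= (length (caddr expr)) 3))
            (list 'map                       ; rewrite to (map (compose f g) xs)
                  (list 'compose (cadr expr) (cadr (caddr expr)))
                  (caddr (caddr expr)))
            expr))))

(display (fuse-maps '(map f (map g xs))))
; prints (map (compose f g) xs)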
We can also define ad-hoc clauses to existing functions as we need them, e.g.
plus (times x y) (times x z) = times x (plus y z)
Another practical consideration is that rewrite rules must be confluent if we want deterministic results, i.e. we get the same result regardless of which order we apply the rules in. No algorithm can check this for us (it's undecidable in general) and the search space is far too large for individual tests to tell us much. Instead we must convince ourselves that our system is confluent by some formal or informal proof; one way would be to follow systems which are already known to be confluent.
For example, beta-reduction is known to be confluent (via the Church-Rosser Theorem), so if we write all of our rules in the style of beta-reductions then we can be confident that our rules are confluent. Of course, that's exactly what functional programming languages do!
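To see why rule order can matter, here is a miniature non-confluent system sketched in Scheme: both rules match the same term, and the result depends on which fires first (the rules and names are illustrative):
(define (rule-1 t)                       ; (f x) => x
  (if (and (pair? t) (eq? (car t) 'f))
      (cadr t)
      t))

(define (rule-2 t)                       ; (g (f x)) => fused
  (if (and (pair? t) (eq? (car t) 'g)
           (pair? (cadr t)) (eq? (car (cadr t)) 'f))
      'fused
      t))

(define term '(g (f x)))

(display (rule-2 term)) (newline)                 ; fused: rule-2 fired first
(display (rule-2 (list 'g (rule-1 (cadr term))))) ; (g x): rule-1 fired first, so rule-2 can no longer apply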

Scheme let statement

In Scheme, which is a functional programming language, there is no assignment statement.
But in a let statement
(let ((x 2))
  (+ x 3))
You are assigning 2 to x, so why doesn't this violate the principle that there are no assignment statements in functional programming?
The statement "Scheme which is a functional programming language" is incorrect. In Scheme, a functional-programming style is encouraged, but not forced. In fact, you can use set! (an assignment statement!) for modifying the value of any variable:
(define x 10)
(set! x (+ x 3))
x
=> 13
Regarding the let statement of the question, remember that an expression such as this one:
(let ((x 10))
  (+ x 3))
=> 13
... it's just syntactic sugar, and under the hood it's implemented like this:
((lambda (x)
   (+ x 3))
 10)
=> 13
Notice that a let performs one-time single assignments on its variables, so it doesn't violate any purely functional programming principle per se; the following can be affirmed of a let expression:
An evaluation of an expression does not have a side effect if it does not change an observable state of the machine, and produces same values for same input
Also, quoting from Wikipedia:
Impure functional languages provide both single assignment as well as true assignment (though true assignment is typically used with less frequency than in imperative programming languages). For example, in Scheme, both single assignment (with let) and true assignment (with set!) can be used on all variables, and specialized primitives are provided for destructive update inside lists, vectors, strings, etc.
http://en.wikipedia.org/wiki/Assignment_(computer_science)#Single_assignment
Basically, it's a single assignment that's allowable. Other assignment is not "allowed" because of side effects.
Edit: "allowed" is in quotation marks because, as Oscar stated, it is not mandatory but suggested.
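To make the distinction concrete, here is a small Scheme sketch contrasting single assignment (a fresh binding that shadows) with true assignment (set! mutating an existing binding):
(define (single-assignment)
  (let ((x 2))
    (let ((x (+ x 3)))  ; a fresh x that shadows the old one
      x)))              ; => 5, and the outer x is untouched

(define (true-assignment)
  (let ((x 2))
    (set! x (+ x 3))    ; the same x is destructively updated
    x))                 ; => 5

(display (single-assignment)) (newline)
(display (true-assignment)) (newline)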

Nested functions: Improper use of side-effects?

I'm learning functional programming, and have tried to solve a couple of problems in a functional style. One thing I experienced, while dividing up my problem into functions, was that I seemed to have two options: use several disparate functions with similar parameter lists, or use nested functions which, as closures, can simply refer to bindings in the parent function.
Though I ended up going with the second approach, because it made function calls smaller and it seemed to "feel" better, from my reading it seems like I may be missing one of the main points of functional programming, in that this seems "side-effecty"? Now granted, these nested functions cannot modify the outer bindings, as the language I was using prevents that, but if you look at each individual inner function, you can't say "given the same parameters, this function will return the same results" because they do use the variables from the parent scope... am I right?
What is the desirable way to proceed?
Thanks!
Functional programming isn't all-or-nothing. If nesting the functions makes more sense, I'd go with that approach. However, if you really want the internal functions to be purely functional, explicitly pass all the needed parameters into them.
Here's a little example in Scheme:
(define (foo a)
  (define (bar b)
    (+ a b))   ; getting a from outer scope, not purely functional
  (bar 3))

(define (foo a)
  (define (bar a b)
    (+ a b))   ; getting a from function parameters, purely functional
  (bar a 3))

(define (bar a b)  ; since this is purely functional, we can remove it from its
  (+ a b))         ; environment and it still works

(define (foo a)
  (bar a 3))
Personally, I'd go with the first approach, but either will work equally well.
Nesting functions is an excellent way to divide up the labor in many functions. It's not really "side-effecty"; if it helps, think of the captured variables as implicit parameters.
One example where nested functions are useful is to replace loops. The parameters to the nested function can act as induction variables which accumulate values. A simple example:
let factorial n =
  let rec facHelper p n =
    if n = 1 then p else facHelper (p*n) (n-1)
  in
  facHelper 1 n
In this case, it wouldn't really make sense to declare a function like facHelper globally, since users shouldn't have to worry about the p parameter.
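For comparison, the same accumulator pattern in Scheme can use a named let, which keeps the helper local just as the OCaml version does (fac-helper is an illustrative name):
(define (factorial n)
  (let fac-helper ((p 1) (n n))   ; p accumulates the product
    (if (= n 1)
        p
        (fac-helper (* p n) (- n 1)))))

(display (factorial 5)) ; 120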
Be aware, however, that it can be difficult to test nested functions individually, since they cannot be referred to outside of their parent.
Consider the following (contrived) Haskell snippet:
putLines :: [String] -> IO ()
putLines lines = putStr string
  where string = concat lines
string is a locally bound named constant. But isn't it also a function taking no arguments that closes over lines and is therefore referentially intransparent? (In Haskell, constants and nullary functions are indeed indistinguishable!) Would you consider the above code “side-effecty” or non-functional because of this?
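For what it's worth, the same shape in Scheme, where string is a locally bound value closing over the lines parameter (put-lines is an illustrative name):
(define (put-lines lines)
  (let ((string (apply string-append lines))) ; string closes over lines
    (display string)))

(put-lines '("a" "b" "c")) ; prints abc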
