Generative Recursion vs. Corecursion

Wikipedia's discussion of recursion indicates that generatively recursive functions "can often be interpreted as corecursive functions" (emphasis mine) while Wikipedia's discussion of corecursion makes it sound like the two are synonyms when it states "some authors refer to this [corecursion] as generative recursion."
What differentiates generative recursion and corecursion? Are they strict synonyms or are there times where something generatively recursive is not corecursive?

Related

Nuances of type theory of Isabelle Proof Assistant

I've read this paper:
http://www21.in.tum.de/~berghofe/papers/TYPES2002_slides.pdf
It contains a calculus for the Isabelle type system on page 7. In this calculus there are judgments of the form "term has type" and "proof has term".
1) Is there a more detailed paper on this calculus?
2) The only two rules I do not understand are the ones that relate to constants.
2.1) What does $c_{[\vec{\tau}_n / \vec{\alpha}_n]}$ mean? These are proofs that are constants. We have, for example, the constant "impI" on page 9. How do these $\tau$ and $\alpha$ relate to the term of impI?
2.2) Could you please clarify the rule $\Sigma(c)\,[\vec{\tau}_n / \vec{\alpha}_n]$?
At first glance it might be read as "any set of terms that depend on a proof(!) constant, with arbitrary types substituted for the atomic types", but I don't see how that makes sense.
3) I am also interested in papers on the semantics of this calculus and/or a correctness theorem for it.
I appreciate any related information. Thanks!

Where does the word "flatMap" originate from?

Nowadays flatMap is the most widely used name for the corresponding operation on monad-like objects.
But I can't find where it appeared for the first time and what popularized it.
The oldest appearance I know about is in Scala.
In Haskell it is called bind.
In category theory Greek notation is used.
Partial answer, which hopefully provides some useful "seed nodes" to start a more thorough search. My best guess:
1958 for map used for list processing,
1988 for flatten used in the context of monads,
2004 for flatMap used as important method backing for-comprehensions in Scala.
The function / method name flatMap seems to be a portmanteau word composed from flatten and map. This makes sense, because whenever M is some monad, A,B some types, and a: M[A], f: A => M[B] a value and a function, then the implementations of map, flatMap and flatten should satisfy
a.flatMap(f) = a.map(f).flatten
(in Scala-syntax).
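To make that law concrete, here is a quick check on Scala's standard List (an illustrative snippet of mine, not taken from any of the sources discussed here):

val a: List[Int] = List(1, 2, 3)
val f: Int => List[Int] = n => List(n, n * 10)

val lhs = a.flatMap(f)       // List(1, 10, 2, 20, 3, 30)
val rhs = a.map(f).flatten   // List(1, 10, 2, 20, 3, 30)
assert(lhs == rhs)           // the two sides agree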
Let's first consider both components, map and flatten, separately.
Map
The map-function seems to have been used to map over lists since time immemorial. My best guess would be that it came from Lisp (around 1958), and then spread to all other languages that had anything resembling higher-order functions.
Flatten
Given how many things are represented by lists in Lisp, I assume that flatten has also been used there for list processing.
The usage of flatten in the context of monads must be much more recent, because monads themselves were introduced into programming quite a bit later. If we are looking for the use of the word "flatten" in the context of monadic computations, we should probably at least check the papers by Eugenio Moggi. Indeed, in "Computational Lambda-Calculus and Monads" from 1988, he uses the formulation:
Remark 2.2: Intuitively eta_A: A -> TA gives the inclusion of values into computations, while mu_A: T^2 A -> TA flatten a computation of a computation into a computation.
(typesetting changed by me, emphasis mine, text in italic as in original). I think it's interesting that Moggi talks about flattening computations, and not just lists.
Math notation / "Greek"
On the Greek used in mathematical notation: in category theory, the more common way to introduce monads is through the natural transformations that correspond to pure and flatten; the morphisms corresponding to flatMap are de-emphasized. However, nobody calls it "flatten". For example, Mac Lane calls the natural transformation corresponding to the method pure "unit" (not to be confused with the method unit), and flatten is usually called "multiplication", in analogy with monoids. One might investigate further whether it was different when the "triple" terminology was more prevalent.
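For reference, the standard categorical presentation (my own transcription of the textbook definition, so treat the notation as a sketch): a monad on a category $\mathcal{C}$ is a functor $T$ together with two natural transformations
$$\eta : \mathrm{Id} \Rightarrow T \quad (\text{"unit", corresponding to } \mathtt{pure}), \qquad \mu : T^2 \Rightarrow T \quad (\text{"multiplication", corresponding to } \mathtt{flatten}),$$
and the operation corresponding to flatMap is the Kleisli extension: for $f : A \to T B$,
$$\mathtt{flatMap}(f) = \mu_B \circ T f : T A \to T B.$$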
flatMap
To find the origins of the flatMap portmanteau word, I'd propose to start with the most prominent popularizer today, and then try to backtrack from there. Apparently, flatMap is a Scala meme, so it seems reasonable to start from Scala. One might check the standard libraries (especially the List data structure) of the usual suspects: the languages that influenced Scala. These "roots" are named in Chapter 1, section 1.4 in Odersky's "Programming in Scala":
C, C++ and C# are probably not where it came from.
In Java it was the other way around: flatMap came from Scala into Java 8.
I can't say anything about Smalltalk
Ruby definitely has flat_map on Enumerable, but I don't know anything about Ruby, and I don't want to dig into the source code to find out when it was introduced.
Algol and Simula: definitely not.
Strangely enough, ML (SML) seems to get by without flatMap; it only has concat (which is essentially the same as flatten). OCaml's lists also seem to have flatten, but no flatMap.
As you've already mentioned, Haskell had all this long ago, but in Haskell it is called bind and written as an operator (>>=).
Erlang has flatmap on lists, but I'm not sure whether this is the origin, or whether it was introduced later. The problem with Erlang is that it is from 1986; back then there was no GitHub.
I can't say anything about Iswim, Beta and gbeta.
I think it would be fair to say that flatMap has been popularized by Scala, for two reasons:
flatMap took a prominent role in the design of Scala's collection library, and a few years later it turned out to generalize nicely to huge distributed collections (Apache Spark and similar tools).
flatMap became the favorite toy of everyone who decided to do functional programming on the JVM properly (Scalaz and libraries inspired by Scalaz, like Scala Cats).
To sum it up: the "flatten" terminology has been used in the context of monads since the very beginning. Later, it was combined with map into flatMap, and popularized by Scala, or more specifically by frameworks such as Apache Spark and Scalaz.
flatmap was introduced in Section 2.2.3 Sequences as Conventional Interfaces in "Structure and Interpretation of Computer Programs" as
(define (flatmap proc seq)
  (accumulate append nil (map proc seq)))
The first edition of the book appeared in 1985.

What is Lambda definability?

While I was reading about the lambda calculus, I came across the term "lambda definability". Can someone please explain what that is, as I couldn't find any good resources on it?
Thanks
More generally, there is a line of research seeking to characterize "lambda definability" over a broad class of languages. "Lambda definability" itself is typically relative to a semantics of a language given in terms of sets. For a type T in our language, write |T| for its interpretation as a set. Now, take an element of |T| -- call it e. We want to know if there is a term in our language -- call it x : T (x of type T) -- such that |x| is e. If there is such a term, then we say that e is lambda-definable.
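A small concrete example (my own, not from the answer above): take the pure simply typed lambda calculus with a single base type $o$, interpreted as the two-element set $\{tt, ff\}$. Then
$$|o \to o| = \{\, \mathrm{id},\ \mathrm{not},\ \mathrm{const}_{tt},\ \mathrm{const}_{ff} \,\},$$
but every closed term of type $o \to o$ (there are no constants in the language) is $\beta\eta$-equal to $\lambda x.\, x$, so only $\mathrm{id}$ is lambda-definable; the other three elements of $|o \to o|$ are not.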
Now, in our perfect world, when we interpret a language into sets, we would like to say that the sets associated with each type contain precisely the lambda-definable elements of that type and only those (completeness). It would also be nice, perhaps, to say that we can provide an algorithm to determine whether a claimed element of a set has an associated lambda term (decidability).
Now, often we don't just model into sets, but into other funny mathematical constructions. And we don't model just from the lambda calculus, but from other related systems such as Plotkin's PCF or the like. But the property under study is typically still called "lambda-definability".
After decades of research there are still many open problems and questions in this regard -- while certain lower-order terms have been shown to have decidable lambda-definability (the classic results involve terms up to second-order), many terms do not yield so easily. This paper ("The Undecidability of lambda-Definability" by Ralph Loader) gives an important such undecidability result and characterizes some consequences: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.6860
See the Church-Turing thesis, where lambda-definable functions (from Church) are those that give us "effectively computable" functions. Turing showed that programs implementable on a Turing machine are equivalent to lambda-definable functions.

What is the most minimal functional programming language?

What is the most minimal functional programming language?
It depends on what you mean by minimal.
To start with, the ancestor of functional languages is, first and foremost, mathematical logic. The computational use of certain logics came after the fact. In a sense, many mathematical systems (the cores of which are usually quite minimal) could be called functional languages. But I doubt that's what you're after!
Best known is Alonzo Church's lambda calculus, of which there are variants and descendants:
The simplest form is what's called the untyped lambda calculus; this contains nothing but lambda abstractions, with no restrictions on their use. The creation of data structures using only anonymous functions is done with what's called Church encoding, which represents data by fundamental operations on it; the number 5 becomes "repeat something 5 times", and so on (a small sketch of this appears below).
Lisp-family languages are little more than untyped lambda calculus, augmented with atomic values, cons cells, and a handful of other things. I'd suspect Scheme is the most minimalist here, as if memory serves me it was created first as a teaching language.
The original purpose of the lambda calculus, that of describing logical proofs, failed when the untyped form was shown to be inconsistent, which is a polite term for "lets you prove that false is true". (Historical trivia: the paper proving this, which was a significant thing at the time, did so by writing a logical proof that, in computational terms, went into an infinite loop.) Anyway, the use as a logic was recovered by introducing typed lambda calculus. These tend not to be directly useful as programming languages, however, particularly since being logically sound makes the language not Turing-complete.
However, similarly to how Lisps derive from untyped lambda calculus, a typed lambda calculus extended with built-in recursion, algebraic data types, and a few other things gets you the extended ML-family of languages. These tend to be pretty minimal
at heart, with syntactic constructs having straightforward translations to lambda terms in many cases. Besides the obvious ML dialects, this also includes Haskell and a few other languages. I'm not aware of any especially minimalist typed functional languages, however; such a language would likely suffer from poor usability far worse than a minimalist untyped language.
So as far as lambda calculus variants go, the pure untyped lambda calculus with no extra features is Turing-complete and about as minimal as you can get!
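To make the Church-encoding idea mentioned above concrete, here is a hedged sketch in Scala (the type is specialized to Int for readability; in the untyped lambda calculus there is no such restriction, and all names here are mine):

object ChurchSketch {
  // A Church numeral n is "apply a function n times".
  type Church = (Int => Int) => Int => Int

  val zero: Church = f => x => x                         // apply f zero times
  def succ(n: Church): Church = f => x => f(n(f)(x))     // one more application
  def plus(m: Church, n: Church): Church = f => x => m(f)(n(f)(x))

  def toInt(n: Church): Int = n(_ + 1)(0)                // decode by counting applications

  def main(args: Array[String]): Unit = {
    val five = succ(succ(succ(succ(succ(zero)))))
    println(toInt(five))              // 5
    println(toInt(plus(five, five)))  // 10
  }
}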
However, arguably more minimal is to eliminate the concept of "variables" entirely--in fact, this was originally done to simplify meta-mathematical proofs about logical systems, if memory serves me--and use only higher-order functions called combinators. Here we have:
Combinatory logic itself, as originally invented by Moses Schönfinkel and developed extensively by Haskell Curry. Each combinator is defined by a simple substitution rule, for instance Sxyz = xz(yz). The lowercase letters are used like variables in this definition, but keep in mind that combinatory logic itself doesn't use variables, or assign names to anything at all. Combinatory logic is minimal, to be sure, but not too friendly as a programming language. Best-known is the SK combinator base. S is defined as in the example above; K is Kxy = x. Those two combinators alone suffice to make it Turing-complete! This is almost frighteningly minimal. (A small sketch of S and K follows below.)
Unlambda is a language based on SK combinators, extending it with a few extra combinators with special properties. Less minimal, but lets you write "Hello World".
Even two combinators is more than you need, though. Various one-combinator bases exist; perhaps the best known is the iota combinator, defined as ιx = xSK, which is used in a minimalist language also called Iota.
Also of some note is Lazy K, which is distinguished from Unlambda by not introducing additional combinators, having no side effects, and using lazy evaluation. Basically, it's the Haskell of the combinator-based-esoteric-language world. It supports both the SK base, as well as the iota combinator.
Which of those strikes you as most "minimal" is probably a matter of taste.
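As a hedged illustration of the S and K definitions above (a typed Scala transcription of mine; real combinatory logic is untyped, so this only captures the substitution behaviour):

object SKSketch {
  // K x y = x
  def K[A, B]: A => B => A = x => _ => x
  // S x y z = x z (y z)
  def S[A, B, C]: (A => B => C) => (A => B) => A => C =
    x => y => z => x(z)(y(z))

  // The identity combinator can be recovered as I = S K K,
  // since S K K a = K a (K a) = a.
  def I[A]: A => A = S[A, A => A, A](K[A, A => A])(K[A, A])

  def main(args: Array[String]): Unit = {
    println(I(42))       // 42
    println(I("hello"))  // hello
  }
}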
The arguably most minimal functional languages are iota and Jot, because they use only one combinator (while unlambda needs two). Here is a short explanation: http://web.archive.org/web/20061105204247/http://ling.ucsd.edu/~barker/Iota/
I'd imagine the most minimal functional "programming language" would be lambda calculus.
BrainF*ck is a simple, easy to use programming language. Here's a quick rundown.
Imagine you have a near-infinite range of boxes, each empty. Luckily, you are not alone! You can move back and forth along the line, put things in them, and take them out. Though quite basic, with enough time you can do about anything: http://www.iwriteiam.nl/Ha_bf_inter.html. Here are the commands.
+ | add one to current box
- | take one from current box
> | move one box to the right
< | move one box to the left
[ ] | loop: [ skips past the matching ] if the current box is 0, and ] jumps back to the matching [ if it is not
. | print current value
, | input current value
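To make the command table above concrete, here is a hedged sketch of a tiny interpreter in Scala (fixed-size tape, no error handling, names are mine; the bracket commands follow the usual Brainfuck semantics):

object BfSketch {
  def run(program: String, input: Iterator[Char] = Iterator.empty): Unit = {
    val tape = Array.fill(30000)(0)   // the row of "boxes", all starting empty
    var ptr = 0                       // which box we are standing at
    var pc = 0                        // position in the program
    while (pc < program.length) {
      program(pc) match {
        case '+' => tape(ptr) += 1
        case '-' => tape(ptr) -= 1
        case '>' => ptr += 1
        case '<' => ptr -= 1
        case '.' => print(tape(ptr).toChar)
        case ',' => tape(ptr) = if (input.hasNext) input.next().toInt else 0
        case '[' =>                   // if the box is 0, skip past the matching ]
          if (tape(ptr) == 0) {
            var depth = 1
            while (depth > 0) {
              pc += 1
              if (program(pc) == '[') depth += 1
              else if (program(pc) == ']') depth -= 1
            }
          }
        case ']' =>                   // if the box is not 0, jump back to the matching [
          if (tape(ptr) != 0) {
            var depth = 1
            while (depth > 0) {
              pc -= 1
              if (program(pc) == ']') depth += 1
              else if (program(pc) == '[') depth -= 1
            }
          }
        case _ => ()                  // anything else is treated as a comment
      }
      pc += 1
    }
  }

  def main(args: Array[String]): Unit =
    // puts 8*8 = 64 in the second box, adds 1, and prints it as the character 'A'
    run("++++++++[>++++++++<-]>+.")
}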
other stuff to look at:
P" | simplified BF
language f | newer simplified BF
http://www2.gvsu.edu/miljours/bf.html | cool BF stuff/intro
https://www.esolangs.org/wiki/Language_list | list of similar langs/variants
An esoteric programming language (a.k.a. esolang) is a programming language designed to test the boundaries of computer programming language design, as a proof of concept, as software art, as a hacking interface to another language (particularly functional programming or procedural programming languages), or as a joke. The use of esoteric distinguishes these languages from programming languages that working developers use to write software. Usually, an esolang's creators do not intend the language to be used for mainstream programming, although some esoteric features, such as visuospatial syntax, have inspired practical applications in the arts. Such languages are often popular among hackers and hobbyists.

Tail-recursion optimization in Oz

In the chapter about functions in the Oz tutorial, it says that:
similar to lazy functional languages Oz allows certain forms of tail-recursion optimizations that are not found in certain strict functional languages including Standard ML, Scheme, and the concurrent functional language Erlang. However, standard function definitions in Oz are not lazy.
It then goes on to show the following function which is tail-recursive in Oz:
fun {Map Xs F}
   case Xs
   of nil then nil
   [] X|Xr then {F X}|{Map Xr F}
   end
end
What this does is map the empty list to the empty list, and a non-empty list to the result of applying the function F to its head, prepended to the result of calling Map on the tail. In other languages this would not be tail-recursive, because the last operation is the prepend, not the recursive call to Map.
So my question is: If "standard function definitions in Oz are not lazy", what does Oz do that languages like Scheme or Erlang can't (or won't?) to be able to perform tail-recursion optimization for this function? And exactly when is a function tail-recursive in Oz?
This is called Tail Recursion Modulo Cons. Basically, prepending to the list directly after the recursive call is the same as appending to the list directly before the recursive call (and thus building the list as a "side-effect" of the purely functional "loop"). This is a generalization of tail recursion that works not just with cons lists but any data constructor with constant operations.
It was first described (but not named) as a LISP compilation technique in 1974 by Daniel P. Friedman and David S. Wise in Technical Report TR19: Unwinding Structured Recursions into Iterations and it was formally named and introduced by David H. D. Warren in 1980 in the context of writing the first-ever Prolog compiler.
The interesting thing about Oz, though, is that TRMC is neither a language feature nor an explicit compiler optimization; it's just a side-effect of the language's execution semantics. Specifically, the fact that Oz is a declarative concurrent constraint language, which means that every variable is a dataflow variable (or "everything is a promise", including every storage location). Since everything is a promise, we can model returning from a function as first setting up the return value as a promise, and then later on fulfilling it.
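As a hedged sketch of what this amounts to, here is roughly what the transformation produces, written out by hand in Scala (my own names and types, not Oz; the mutable tail field plays the role of Oz's unbound dataflow variable Yr):

object TrmcSketch {
  sealed trait DList[+A]
  case object DNil extends DList[Nothing]
  final class DCons[A](val head: A, var tail: DList[A]) extends DList[A]

  // Allocate the first cons cell with a "hole" for its tail, then fill the
  // hole from a helper whose recursive call is in genuine tail position.
  def map[A, B](xs: DList[A])(f: A => B): DList[B] = xs match {
    case DNil        => DNil
    case c: DCons[A] =>
      val out = new DCons[B](f(c.head), DNil)
      mapInto(c.tail, out)(f)
      out
  }

  @annotation.tailrec
  private def mapInto[A, B](src: DList[A], dst: DCons[B])(f: A => B): Unit =
    src match {
      case DNil        => ()                  // nothing left: the hole stays DNil
      case c: DCons[A] =>
        val cell = new DCons[B](f(c.head), DNil)
        dst.tail = cell                       // bind the hole (like Yr = ... in Oz)
        mapInto(c.tail, cell)(f)              // tail call: constant stack space
    }
}

The list is built front to back without reversing it, and the recursion becomes an ordinary loop; in Oz the same effect falls out of binding dataflow variables rather than mutating a field.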
Peter Van Roy, co-author (with Seif Haridi) of the book Concepts, Techniques, and Models of Computer Programming, one of the designers of Oz, and one of its implementers, explains how exactly TRMC works in a comment thread on Lambda the Ultimate: Tail-recursive map and declarative agents:
The above example of bad Scheme code turns into good tail-recursive Oz code when translated directly into Oz syntax. This gives:
fun {Map F Xs}
   if Xs==nil then nil
   else {F Xs.1}|{Map F Xs.2} end
end
This is because Oz has single-assignment variables. To understand the execution, we translate this example into the Oz kernel language (I give just a partial translation for clarity):
proc {Map F Xs Ys}
   if Xs==nil then Ys=nil
   else local Y Yr in
      Ys=Y|Yr
      {F Xs.1 Y}
      {Map F Xs.2 Yr}
   end end
end
That is, Map is tail-recursive because Yr is initially unbound. This is not just a clever trick; it is profound because it allows declarative concurrency and declarative multi-agent systems.
I am not too familiar with lazy functional languages, but if you think about the function Map in your question, it is easy to translate it to a tail-recursive implementation if temporarily incomplete values in the heap are allowed (mutated into more complete values one call at a time).
I have to assume that they are talking about this transformation in Oz. Lispers used to do this optimization by hand -- all values were mutable, and in this case a function called setcdr would be used -- but you had to know what you were doing. Computers did not always have gigabytes of memory. It was justified to do this by hand; it arguably no longer is.
Back to your question: other modern languages do not do it automatically, probably because it would be possible to observe the incomplete value while it is being built, and this must be what Oz has found a solution to. What other differences are there in Oz as compared to other languages that would explain it?

Resources