What is the most minimal functional programming language?

What is the most minimal functional programming language?

It depends on what you mean by minimal.
To start with, the ancestor of functional languages is, first and foremost, mathematical logic. The computational use of certain logics came after the fact. In a sense, many mathematical systems (the cores of which are usually quite minimal) could be called functional languages. But I doubt that's what you're after!
Best known is Alonzo Church's lambda calculus, of which there are variants and descendants:
The simplest form is what's called the untyped lambda calculus; this contains nothing but lambda abstractions, with no restrictions on their use. Data structures are built out of nothing but anonymous functions via what's called Church encoding, which represents a piece of data by the fundamental operation on it; the number 5 becomes "repeat something 5 times", and so on.
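To make that concrete, here is a sketch of Church numerals in Haskell syntax (purely illustrative: the names, the type annotation and the toInt helper are conveniences the raw calculus does not have):
-- A Church numeral n is "apply f to x, n times".
zero = \f x -> x
suc n = \f x -> f (n f x)       -- successor: one more application of f
add m n = \f x -> m f (n f x)   -- addition: n applications, then m more

five = suc (suc (suc (suc (suc zero))))

toInt n = n (+ 1) (0 :: Int)    -- inspect a numeral; toInt five == 5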
Lisp-family languages are little more than untyped lambda calculus, augmented with atomic values, cons cells, and a handful of other things. I suspect Scheme is the most minimalist here; if memory serves, it was originally created as a teaching language.
The original purpose of the lambda calculus, that of describing logical proofs, failed when the untyped form was shown to be inconsistent, which is a polite term for "lets you prove that false is true". (Historical trivia: the paper proving this, which was a significant thing at the time, did so by writing a logical proof that, in computational terms, went into an infinite loop.) Anyway, the use as a logic was recovered by introducing typed lambda calculus. These tend not to be directly useful as programming languages, however, particularly since being logically sound makes the language not Turing-complete.
However, similarly to how Lisps derive from the untyped lambda calculus, a typed lambda calculus extended with built-in recursion, algebraic data types, and a few other things gets you the extended ML family of languages. These tend to be pretty minimal at heart, with syntactic constructs having straightforward translations to lambda terms in many cases. Besides the obvious ML dialects, this also includes Haskell and a few other languages. I'm not aware of any especially minimalist typed functional languages, however; such a language would likely suffer from poor usability far worse than a minimalist untyped language.
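As a sketch of how thin that extra layer is, consider a Haskell definition like this (purely illustrative):
data Nat = Zero | Succ Nat        -- an algebraic data type

plus :: Nat -> Nat -> Nat         -- recursion via a named function
plus Zero     n = n
plus (Succ m) n = Succ (plus m n)
At heart, plus is little more than a lambda term: a case analysis wrapped in a fixed-point combinator to tie the recursive knot.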
So as far as lambda calculus variants go, the pure untyped lambda calculus with no extra features is Turing-complete and about as minimal as you can get!
However, arguably more minimal is to eliminate the concept of "variables" entirely--in fact, this was originally done to simplify meta-mathematical proofs about logical systems, if memory serves me--and use only higher-order functions called combinators. Here we have:
Combinatory logic itself, as originally invented by Moses Schönfinkel and developed extensively by Haskell Curry. Each combinator is defined by a simple substitution rule, for instance Sxyz = xz(yz). The lowercase letters are used like variables in this definition, but keep in mind that combinatory logic itself doesn't use variables, or assign names to anything at all. Combinatory logic is minimal, to be sure, but not too friendly as a programming language. Best-known is the SK combinator base. S is defined as in the example above; K is Kxy = x. Those two combinators alone suffice to make it Turing-complete! This is almost frighteningly minimal.
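In Haskell notation, the substitution rules look like this (a sketch for illustration; combinatory logic itself has neither lambdas nor named definitions):
s x y z = x z (y z)   -- S: Sxyz = xz(yz)
k x y   = x           -- K: Kxy = x

i = s k k             -- identity derived from S and K: S K K x = K x (K x) = x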
Unlambda is a language based on SK combinators, extending it with a few extra combinators with special properties. Less minimal, but lets you write "Hello World".
Even two combinators is more than you need, though. Various one-combinator bases exist; perhaps the best known is the iota combinator, defined as ιx = xSK, which is used in a minimalist language also called Iota.
Also of some note is Lazy K, which is distinguished from Unlambda by not introducing additional combinators, having no side effects, and using lazy evaluation. Basically, it's the Haskell of the combinator-based esoteric-language world. It supports both the SK base and the iota combinator.
Which of those strikes you as most "minimal" is probably a matter of taste.

The arguably most minimal functional languages are Iota and Jot, because they use only one combinator (while Unlambda needs two). Here is a short explanation: http://web.archive.org/web/20061105204247/http://ling.ucsd.edu/~barker/Iota/

I'd imagine the most minimal functional "programming language" would be lambda calculus.

BrainF*ck is a simple, easy to use programming language. Here's a quick rundown.
Imagine you have a near-infinite row of boxes, each empty. Luckily, you are not alone! You can move back and forth along the line, put things in the boxes, and take them out. Though quite basic, with enough time you can do just about anything: http://www.iwriteiam.nl/Ha_bf_inter.html. Here are the commands.
+ | add one to current box
- | take one from current box
> | move one box to the right
< | move one box to the left
[ ] | loop: if the current box is zero, jump past the matching ]; at the ], jump back to the matching [
. | output the current box's value
, | read one value of input into the current box
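To give a feel for how little machinery those commands need, here is a minimal interpreter sketch in Haskell (my own illustration, separate from the interpreter page linked above; no error handling, and the row of boxes is modeled as infinite in both directions):
import Data.Char (chr, ord)

-- A parsed program; Loop carries its own body.
data Op = Inc | Dec | MoveR | MoveL | Out | In | Loop [Op]

-- Parse source into ops; ']' closes the innermost open loop.
parse :: String -> [Op]
parse = fst . go
  where
    go []       = ([], [])
    go (']':cs) = ([], cs)
    go ('[':cs) = let (body, rest)  = go cs
                      (ops,  rest') = go rest
                  in (Loop body : ops, rest')
    go (c:cs)   = let (ops, rest) = go cs
                  in (maybe ops (: ops) (op c), rest)
    op '+' = Just Inc
    op '-' = Just Dec
    op '>' = Just MoveR
    op '<' = Just MoveL
    op '.' = Just Out
    op ',' = Just In
    op _   = Nothing   -- everything else is a comment

-- The row of boxes, as a zipper: boxes to the left (nearest first),
-- the current box, and boxes to the right.
data Tape = Tape [Int] Int [Int]

blank :: Tape
blank = Tape (repeat 0) 0 (repeat 0)

-- Run ops, threading the tape and the remaining input; collect output.
step :: [Op] -> Tape -> String -> (Tape, String, String)
step [] t inp = (t, inp, [])
step (o:os) t@(Tape ls c rs) inp = case o of
  Inc   -> next (Tape ls (c + 1) rs) inp ""
  Dec   -> next (Tape ls (c - 1) rs) inp ""
  MoveR -> let (r:rs') = rs in next (Tape (c : ls) r rs') inp ""
  MoveL -> let (l:ls') = ls in next (Tape ls' l (c : rs)) inp ""
  Out   -> next t inp [chr c]
  In    -> case inp of
             (i:is) -> next (Tape ls (ord i) rs) is ""
             []     -> next t [] ""
  Loop body
    | c == 0    -> next t inp ""               -- skip the loop
    | otherwise ->                             -- run the body, then re-test
        let (t',  inp',  out)  = step body t inp
            (t'', inp'', out') = step (o : os) t' inp'
        in (t'', inp'', out ++ out')
  where
    next t' inp' out =
      let (t'', inp'', out') = step os t' inp'
      in (t'', inp'', out ++ out')

run :: String -> String -> String
run src input = let (_, _, out) = step (parse src) blank input in out

-- e.g. run "++++++++[>+++++++++<-]>." "" == "H"   (8 * 9 == 72 == 'H')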
other stuff to look at:
P" | simplified BF
language f | newer simplified BF
http://www2.gvsu.edu/miljours/bf.html | cool BF stuff/intro
https://www.esolangs.org/wiki/Language_list | list of similar langs/variants

An esoteric programming language (a.k.a. esolang) is a programming language designed to test the boundaries of computer programming language design, as a proof of concept, as software art, as a hacking interface to another language (particularly functional programming or procedural programming languages), or as a joke. The use of the word esoteric distinguishes these languages from programming languages that working developers use to write software. Usually, an esolang's creators do not intend the language to be used for mainstream programming, although some esoteric features, such as visuospatial syntax, have inspired practical applications in the arts. Such languages are often popular among hackers and hobbyists.

Related

Where does the word "flatMap" originate from?

Nowadays flatMap is the most widely used name for the corresponding operation on monad-like objects.
But I can't find where it first appeared and what popularized it.
The oldest appearance I know about is in Scala.
In Haskell it is called bind.
In category theory, Greek notation is used.
Partial answer, which hopefully provides some useful "seed nodes" to start a more thorough search. My best guess:
1958 for map used for list processing,
1988 for flatten used in context of monads,
2004 for flatMap used as important method backing for-comprehensions in Scala.
The function / method name flatMap seems to be a portmanteau composed of flatten and map. This makes sense, because whenever M is some monad, A, B some types, and a: M[A] a value and f: A => M[B] a function, the implementations of map, flatMap and flatten should satisfy
a.flatMap(f) = a.map(f).flatten
(in Scala syntax).
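In Haskell the same law reads as follows, with >>= (bind) in the role of flatMap, fmap in the role of map, and join in the role of flatten (flatMap' is a made-up name for this sketch):
import Control.Monad (join)

-- "map, then flatten": for any Monad, m >>= f == join (fmap f m)
flatMap' :: Monad m => m a -> (a -> m b) -> m b
flatMap' m f = join (fmap f m)

-- e.g. flatMap' [1, 2] (\x -> [x, x * 10]) == [1, 10, 2, 20],
-- which is exactly [1, 2] >>= \x -> [x, x * 10]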
Let's first consider the two components, map and flatten, separately.
Map
The map-function seems to have been used to map over lists since time immemorial. My best guess would be that it came from Lisp (around 1958), and then spread to all other languages that had anything resembling higher-order functions.
Flatten
Given how many things are represented by lists in Lisp, I assume that flatten has also been used there for list processing.
The usage of flatten in the context of monads must be much more recent, because monads themselves were introduced into programming quite a bit later. If we are looking for the usage of the word "flatten" in the context of monadic computations, we should probably at least check the papers by Eugenio Moggi. Indeed, in "Computational Lambda-Calculus and Monads" from 1988, he uses the formulation:
Remark 2.2: Intuitively η_A : A → TA gives the inclusion of values into computations, while μ_A : T²A → TA flatten a computation of a computation into a computation.
(typesetting changed by me, emphasis mine, text in italic as in original). I think it's interesting that Moggi talks about flattening computations, and not just lists.
Math notation / "Greek"
On the Greek used in mathematical notation: in category theory, the more common way to introduce monads is through the natural transformations that correspond to pure and flatten; the morphisms corresponding to flatMap are deemphasized. However, nobody calls it "flatten". For example, Mac Lane calls the natural transformation corresponding to the method pure "unit" (not to be confused with the method unit), and flatten is usually called "multiplication", in analogy with monoids. One might investigate further whether it was different when the "triple" terminology was more prevalent.
flatMap
To find the origins of the flatMap portmanteau word, I'd propose to start with the most prominent popularizer today, and then try to backtrack from there. Apparently, flatMap is a Scala meme, so it seems reasonable to start from Scala. One might check the standard libraries (especially the List data structure) of the usual suspects: the languages that influenced Scala. These "roots" are named in Chapter 1, section 1.4 in Odersky's "Programming in Scala":
C, C++ and C# are probably not where it came from.
In Java it was the other way around: the flatMap came from Scala into version 1.8 of Java.
I can't say anything about Smalltalk.
Ruby definitely has flat_map on Enumerable, but I don't know anything about Ruby, and I don't want to dig into the source code to find out when it was introduced.
Algol and Simula: definitely not.
Strangely enough, ML (SML) seems to get by without flatMap; it only has concat (which is essentially the same as flatten). OCaml's lists also seem to have flatten, but no flatMap.
As you've already mentioned, Haskell had all this long ago, but in Haskell it is called bind and written as the operator >>=.
Erlang has flatmap on lists, but I'm not sure whether this is the origin, or whether it was introduced later. The problem with Erlang is that it is from 1986; back then, there was no GitHub.
I can't say anything about Iswim, Beta and gbeta.
I think it would be fair to say that flatMap has been popularized by Scala, for two reasons:
flatMap took a prominent role in the design of Scala's collection library, and a few years later it turned out to generalize nicely to huge distributed collections (Apache Spark and similar tools)
flatMap became the favorite toy of everyone who decided to do functional programming on the JVM properly (Scalaz and libraries inspired by Scalaz, like Scala Cats)
To sum it up: the "flatten" terminology has been used in the context of monads since the very beginning. Later, it was combined with map into flatMap, and popularized by Scala, or more specifically by frameworks such as Apache Spark and Scalaz.
flatmap was introduced in Section 2.2.3, "Sequences as Conventional Interfaces", of "Structure and Interpretation of Computer Programs" as
(define (flatmap proc seq)
  (accumulate append nil (map proc seq)))
; e.g. (flatmap (lambda (x) (list x x)) (list 1 2)) => (1 1 2 2)
The first edition of the book appeared in 1985.

What is the difference between functional programming and declarative programming?

I would like to understand the difference between functional and declarative programming.
Can you show me an example where the code is declarative, but not yet functional?
Is it possible to be functional but not declarative, i.e. imperative?
A non-functional declarative language is PROLOG. Programming in PROLOG consists of stating a number of facts and rules, and then asking questions, which the system tries to verify or deny.
Example:
human(socrates). // "Socrates is a human."
mortal(X) :- human(X). // "If X is a human, then X is mortal" or
// "All humans are mortal."
? mortal(socrates) // Is Socrates mortal?
Yes.
? mortal(X) // Who is mortal?
socrates
? mortal(pythagoras).
No. // since system doesn't know about any human except Socrates
Another well known language that is declarative, but not functional, is SQL.
Note that not only are there no functions as first-class values; in the PROLOG example, there are no functions at all! To be sure, both SQL and PROLOG have some built-in functions, but no way to let you write your own. One might think that the rule
mortal(X) :- human(X).
is a function, but it isn't; it is an inference rule. Hence: declarative, non-functional languages.
For the second part of your question: it is certainly possible to write imperative code in functional programming languages. Simon Peyton Jones once stated that he thinks that Haskell is the finest imperative programming language in the world. (And this was only a half joke.)
Example:
main = do
  print "Enter a number"
  line <- getLine
  print (succ (read line :: Int))

Higher-order functions in VHDL or Verilog

I've been programming in more functional-style languages and have gotten to appreciate things like tuples, and higher-order functions such as maps and folds/aggregates. Do either VHDL or Verilog have any of these kinds of constructs?
For example, is there any way to do even simple things like
divByThreeCount = count (\x -> x `mod` 3 == 0) myArray
or
myArray2 = map (\x -> x `mod` 3) myArray
or even better yet let me define my own higher-level constructs recursively, in either of these languages?
I think you're right that there is a clash between the imperative style of HDLs and the more functional aspects of combinatorial circuits. Describing parallel circuits with languages which are linear in nature is odd, but I think a full-blown functional HDL would be a disaster.
The thing with functional languages (I find) is that it's very easy to write code which takes huge resources, either in time, memory, or processing power. That's their power; they can express complexity quite succinctly, but they do so at the expense of resource management. Put that in an HDL and I think you'll have a lot of large, power-hungry designs when synthesised.
Take your map example:
myArray2 = map (\x -> x `mod` 3) myArray
How would you like that to synthesize? To me you've described a modulo operator per element of the array. Ignoring the fact that modulo isn't cheap, was that what you intended, and how would you change it if it wasn't? If I start breaking that function up in some way so that I can say "instantiate a single operator and use it multiple times", I lose a lot of the power of functional languages.
...and then we've got retained state. Retained state is everywhere in hardware. You need it. You certainly wouldn't use a purely functional language.
That said, don't throw away your functional design patterns. Combinatorial processes (VHDL) and "always blocks" (Verilog) can be viewed as functions that apply themselves to the data presented at their input. Pipelines can be viewed as chains of functions. Often the way you structure a design looks functional, and can share a lot with the "Actor" design pattern that's popular in Erlang.
So is there stuff to learn from functional programming? Certainly. Do I wish VHDL and Verilog took more from functional languages? Sometimes. The trouble is functional languages get too high level too quickly. If I can't distinguish between "use one instance of f() many times" and "use many instances of f()" then it doesn't do what a Hardware Description Language must do... describe hardware.
Have a look at http://clash-lang.org for an example of a higher-order language that is transpiled into VHDL/Verilog. It allows functions to be passed as arguments, currying, etc., and even a limited set of recursive data structures (Vec). As you would expect, the pure function moore instantiates a stateful Moore machine given step/output functions and a start state. It has this signature:
moore :: (s -> i -> s) -> (s -> o) -> s -> Signal i -> Signal o
Signals model the time-evolved values in sequential logic.
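For instance, a running-total machine could look like this (a sketch written against the signature quoted above; note that current Clash versions additionally tag Signal with a clock domain, so check the documentation for the exact types):
-- Moore machine whose state is the running total of all inputs so far.
acc :: Signal Int -> Signal Int
acc = moore step out 0
  where
    step s i = s + i   -- next state from current state and input
    out s    = s       -- the output is simply the state
Synthesized, this comes out as roughly a register plus an adder.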
BlueSpec is used by one small company known to few people only: Intel.
It's an extension of SystemVerilog (SV), called BSV.
But it's NOT open; I don't think you can try, use, or even learn it without paying BlueSpec.com big money.
There's also Lava, which is used by Xilinx, but I don't know if you can use it directly.
Note:
In VHDL (and maybe Verilog too), functions CAN'T have a time delay: you can't ask a function to wait for a clock event.
VHDL has a standard library for complex-number calculations.
But in VHDL you can change or overload a system function like add ("+") and define your own implementation.
You can declare vector or matrix add, sub, mul or div (+ - * /) functions, use generic (any) sizes and recursive declarations, and most synthesizers will "understand you" and do what you've asked.

What is Lambda definability?

While I was reading about the lambda calculus, I came across the term "lambda definability". Can someone please explain what it is, as I couldn't find any good resources on it?
Thanks
More generally, there is a line of research seeking to characterize "lambda definability" over a broad class of languages. "Lambda definability" itself is typically relative to a semantics of a language given in terms of sets. For a type T in our language, write |T| for its interpretation as a set. Now, take an element of |T| -- call it e. We want to know if there is a term in our language -- call it x : T (x of type T), such that |x| is e. If there is such a term, then we say that e is lambda-definable.
Now, in our perfect world, when we interpret a language into sets, we would like to say that the sets associated with each type contain precisely the lambda-definable elements of that type, and only those (completeness). It would also be nice, perhaps, to say that we can provide an algorithm to determine whether a claimed element of a set has an associated lambda term (decidability).
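A tiny concrete illustration of completeness at a low-order type (my own example, assuming a language with Booleans and if-then-else, written in Haskell syntax):
-- The full set-theoretic function space |Bool -> Bool| has exactly four
-- elements, and all four are lambda-definable:
ident, neg, always, never :: Bool -> Bool
ident  = \x -> x                          -- identity
neg    = \x -> if x then False else True  -- negation
always = \_ -> True                       -- constantly True
never  = \_ -> False                      -- constantly False
At higher-order types, exhaustive checks like this stop being available, which is where the decidability question gets hard.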
Now, often we don't just model into sets, but into other funny mathematical constructions. And we don't model just from the lambda calculus, but from other related systems such as Plotkin's PCF or the like. But the property under study is typically still called "lambda-definability".
After decades of research there are still many open problems and questions in this regard -- while certain lower-order terms have been shown to have decidable lambda-definability (the classic results involve terms up to second-order), many terms do not yield so easily. This paper ("The Undecidability of lambda-Definability" by Ralph Loader) gives an important such undecidability result and characterizes some consequences: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.6860
See the Church-Turing thesis, where lambda-definable functions (from Church) are those that give us "effectively computable" functions. Turing showed that programs implementable on a Turing machine are equivalent to lambda-definable functions.

Is there a strongly typed programming language which allows you to define new operators?

I am currently looking for a programming language to write a math class in. I know that there are lots and lots of them everywhere around, but since I'm going to start studying math next semester, I thought this might be a good way to get a deeper insight into what I've learned.
Thanks for your replies.
BTW: If you are wondering what I wanted to ask:
"Is there a strongly typed programming language which allows you to define new operators?"
Like EFraim said, Haskell makes this pretty easy:
% ghci
ghci> let a *-* b = (a*a) - (b*b)
ghci> :type (*-*)
(*-*) :: (Num a) => a -> a -> a
ghci> 4 *-* 3
7
ghci> 1.2 *-* 0.9
0.6299999999999999
ghci> (*-*) 5 3
16
ghci> :{
let gcd a b | a > b     = gcd (a - b) b
            | b > a     = gcd a (b - a)
            | otherwise = a
:}
ghci> :type gcd
gcd :: (Ord a, Num a) => a -> a -> a
ghci> gcd 3 6
3
ghci> gcd 12 11
1
ghci> 18 `gcd` 12
6
You can define new infix operators (symbols only) using an infix syntax. You can then use them as infix operators, or enclose them in parens to use them as a normal function. You can also use normal functions (letters, numbers, underscores and single-quotes) as operators by enclosing them in backticks.
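You can also give a new operator a precedence and associativity with a fixity declaration, just as the Prelude does for the built-in ones (a small sketch reusing *-* from above):
infixl 6 *-*   -- same precedence and associativity as (-)

(*-*) :: Num a => a -> a -> a
a *-* b = (a * a) - (b * b)

-- 10 - 4 *-* 3 now parses as (10 - 4) *-* 3 == 27, because both
-- operators sit at level 6 and associate to the left.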
Well, you can redefine a fixed set of operators in many languages, like C++ or C#. Others, like F# or Scala, allow you to define entirely new operators (even as infix ones), which might be even nicer for math-y stuff.
Maybe Haskell? Allows you to define arbitrary infix operators.
Ted Neward wrote a series of articles on Scala aimed at Java developers, and he finished it off by demonstrating how to write a mathematical domain language in Scala (which, incidentally, is a statically typed language)
Part 1
Part 2
Part 3
In C++ you can define operators that work on classes, but not on primitive types like int, since primitives can't have instance methods (an overloaded operator needs at least one user-defined operand). You could make your own number class in C++ and redefine ALL the operators, including +, *, etc.
To make new operators on primitive types you have to turn to functional programming (it seems, from the other answers). This is fine; just keep in mind that functional programming is very different from OOP. But it will be a great new challenge, and functional programming is great for math, as it comes from the lambda calculus. Learning functional programming will teach you different skills and help you greatly with math and programming in general. :D
good luck!
Eiffel allows you to define new operators.
http://dev.eiffel.com
Inasmuch as the procedure you apply to the arguments in a Lisp combination is called an “operator,” then yeah, you can define new operators till the cows come home.
Ada has support for overriding infix operators: here is the reference manual chapter.
Unfortunately you can't create your own new operators; it seems you can only override the existing ones.
type wobble is new integer range 23..89;
function "+" (A, B: wobble) return wobble is
begin
...
end "+";
Ada is not a hugely popular language, it has to be said, but as far as strong typing goes, you can't get much stronger.
EDIT:
Another language which hasn't been mentioned yet is D. It also is a strongly typed language, and supports operator overloading. Again, it doesn't support user-defined infix operators.
From http://www.digitalmars.com/d/1.0/rationale.html
Why not allow user definable operators?
These can be very useful for attaching new infix operations to various unicode symbols. The trouble is that in D, the tokens are supposed to be completely independent of the semantic analysis. User definable operators would break that.
Both OCaml and F# have infix operators. Each has a special set of characters that are allowed in operator names, and with some tricks you can use any function infix (see the OCaml discussion).
I think you should probably think deeply about why you want to use this feature. It seems to me that there are much more important considerations when choosing a language.
I can only think of one possible meaning for the word "operator" in this context, which is just syntactic sugar for a function call; e.g., foo + bar would be translated as a call to a function +(foo, bar).
This is sometimes useful, but not often. I can think of very few instances where I have overloaded/defined an operator.
As noted in the other answers, Haskell does allow you to define new infix operators. However, a purely functional language with lazy evaluation can be a bit of a mouthful. I would probably recommend SML over Haskell if you feel like trying out a functional language for the first time. The type system is a bit simpler, you can use side effects, and it is not lazy.
F# is also very interesting and also features units of measure, which AFAIK is unique to that language. If you have a need for the feature it can be invaluable.
Off the top of my head I can't think of any statically typed imperative languages with user-defined infix operators, but you might want to use a functional language for math programming anyway, since it is much easier to prove facts about a functional program.
You might also want to create a small DSL if syntax issues like infix operators are so important to you. Then you can write the program in whatever language you want and still specify the math in a convenient way.
What do you mean by strong typing? Do you mean static typing (where everything has a type that is known at compile time, and conversions are restricted) or strong typing (everything has a type known at run time, and conversions are restricted)?
I'd go with Common Lisp. It doesn't actually have operators (for example, adding a and b is (+ a b)), but rather functions, which can be defined freely. It has strong typing in that every object has a definite type, even if it can't be known at compile time, and conversions are restricted. It's a truly great language for exploratory programming, and it sounds like that's what you'll be doing.
Ruby does.
require 'rubygems'
require 'superators'

class Array
  superator "<---" do |operand|
    self << operand.reverse
  end
end

["jay"] <--- "spillihp"
You can actually do what you need with C# through operator overloading.
Example:
public static Complex operator -(Complex c)
{
    // Unary negation: flip the sign of both components.
    Complex temp = new Complex();
    temp.x = -c.x;
    temp.y = -c.y;
    return temp;
}
