While I was reading about the lambda calculus, I came across the term "lambda definability". Can someone please explain what that is, as I couldn't find any good resources on it?
Thanks
More generally, there is a line of research seeking to characterize "lambda definability" over a broad class of languages. "Lambda definability" itself is typically relative to a semantics of a language given in terms of sets. For a type T in our language, write |T| for its interpretation as a set. Now, take an element of |T| -- call it e. We want to know if there is a term in our language -- call it x : T (x of type T) -- such that |x| is e. If there is such a term, then we say that e is lambda-definable.
Now, in a perfect world, when we interpret a language into sets, we would like to say that the set associated with each type contains exactly the lambda-definable elements of that type (completeness). It would also be nice to say that we can provide an algorithm to determine whether a claimed element of a set has an associated lambda term (decidability).
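To make that concrete at a very low order (a sketch of mine, using Haskell's Bool as the two-element set interpreting a base type): the interpretation of Bool -> Bool contains exactly four functions, each of them named by a term, so at this type every element is lambda-definable and definability can be checked by brute-force enumeration.

    -- All four set-theoretic functions Bool -> Bool, each given by a term;
    -- at this low-order type, lambda-definability is easy to check by enumeration.
    identity, negation, constTrue, constFalse :: Bool -> Bool
    identity   = \b -> b
    negation   = \b -> if b then False else True
    constTrue  = \_ -> True
    constFalse = \_ -> False

It is at higher-order types (functionals taking functions as arguments) that this kind of enumeration stops being informative, which is where the undecidability result cited below comes in.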
Now, often we don't just model into sets, but into other funny mathematical constructions. And we don't model just from the lambda calculus, but from other related systems such as Plotkin's PCF or the like. But the property under study is typically still called "lambda-definability".
After decades of research there are still many open problems and questions in this regard -- while certain lower-order terms have been shown to have decidable lambda-definability (the classic results involve terms up to second-order), many terms do not yield so easily. This paper ("The Undecidability of lambda-Definability" by Ralph Loader) gives an important such undecidability result and characterizes some consequences: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.36.6860
See the Church-Turing thesis, where the lambda-definable functions (from Church) are exactly the "effectively computable" functions. Turing showed that the functions computable by a Turing machine are the same as the lambda-definable functions.
I have scribbled the term "retracting in OCaml" in a small space in my notebook and now I can't seem to recollect what it was about nor can I find anything about it on the internet.
Does this term really exist, or is it my lecturer's own notation for some property of OCaml? My classmates don't seem to remember what it was about either, so I just want to confirm whether I was dreaming or not.
Another possible explanation: in math, a retraction is a left inverse of a morphism (see this definition). In particular, a parser can be seen as a retraction w.r.t. a given pretty-printer: start from an abstract syntax tree (AST) and pretty-print it, then parsing the resulting source code should yield the original AST (while the opposite is not necessarily true). It doesn't have much to do with OCaml per se but it is linked to an algebraic view (of compiling) which is quite common in functional programming.
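For a toy illustration (my own hypothetical AST and functions, written in Haskell rather than OCaml): parse below is a retraction of pretty, since parse (pretty e) == e for every tree e, while pretty (parse s) need not give back the original string (parse simply ignores input it doesn't expect).

    -- A toy expression type with a pretty-printer and a parser that is a
    -- left inverse (retraction) of it: parse (pretty e) == e for every e,
    -- but pretty (parse s) == s only for strings that pretty could produce.
    data Expr = Lit Int | Add Expr Expr deriving (Eq, Show)

    pretty :: Expr -> String
    pretty (Lit n)   = show n
    pretty (Add a b) = "(" ++ pretty a ++ " + " ++ pretty b ++ ")"

    -- Parses exactly the format produced by pretty (non-negative literals only).
    parse :: String -> Expr
    parse = fst . parseE
      where
        parseE ('(' : s) =
          let (a, s1) = parseE s
              (b, s2) = parseE (drop 3 s1)   -- skip " + "
          in (Add a b, drop 1 s2)            -- skip ")"
        parseE s =
          let (digits, rest) = span (`elem` "0123456789") s
          in (Lit (read digits), rest)

    -- parse (pretty (Add (Lit 1) (Add (Lit 2) (Lit 3))))
    --   == Add (Lit 1) (Add (Lit 2) (Lit 3))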
I am looking for a simple method to assign a number to a mathematical expression, say between 0 and 1, that conveys how simplified that expression is (with 1 meaning fully simplified). For example:
eval('x+1') should return 1.
eval('1+x+1+x+x-5') should return some value less than 1, because it is far from simple (i.e., it can be further simplified).
The parameter of eval() could be either a string or an abstract syntax tree (AST).
A simple idea that occurred to me was to count the number of operators (?)
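For reference, here is a rough sketch of that counting idea over a hypothetical expression type (in Haskell): count operator nodes and squash the count into (0, 1]. It obviously says nothing about how far the expression is from its simplest form, which is the part I don't know how to do.

    -- Hypothetical AST; score 1.0 only when no operators remain at all.
    data Expr = Num Double | Var String | BinOp Char Expr Expr

    operatorCount :: Expr -> Int
    operatorCount (BinOp _ a b) = 1 + operatorCount a + operatorCount b
    operatorCount _             = 0

    score :: Expr -> Double
    score e = 1 / fromIntegral (1 + operatorCount e)

    -- score (BinOp '+' (Var "x") (Num 1)) == 0.5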
EDIT: Let "simplified" mean how close an expression is to the solution of a problem. E.g., given an algebra or calculus problem (a limit, derivative, integral, etc.), it should assign a number telling how close the current expression is to the solution.
The closest metaphor I can come up with is how a maths professor would look at an incomplete solution and mentally assess how close the student is to the answer. Like in a maths exam, where the student didn't finish a problem worth 20 points, but the professor assigns 8 out of 20. Why would he come up with 8/20, and can we program such a thing?
I'm going to break a Stack Overflow rule and post this as an answer instead of a comment, not only because I'm pretty sure the answer is that you can't (at least, not the way you imagine), but also because I believe it can be educational up to a certain degree.
Let's assume that a criterion of simplicity can be established (akin to a normal form). It seems to me that you are very close to trying to solve an analogue of the Entscheidungsproblem or the halting problem. I doubt that, in the complex rule system required for typical algebra, you can find a method that gives a correct and definitive answer for the number of steps in a series of term reductions (ipso facto an arbitrary-length computation) without actually performing it. Such an answer would imply knowing in advance whether the computation terminates, and so contradict the fact that automated theorem proving is, for any sufficiently powerful logic capable of representing arithmetic, an undecidable problem.
In the given example, the teacher is actually either performing that computation mentally (going step by step, applying his own sequence of rules) or giving an estimate based on his experience. But there's no generic algorithm that guarantees his sequence of steps is the simplest possible, nor that his resulting expression is the simplest one (except for trivial expressions), and hence any quantification of "distance" to a solution is meaningless.
Were all of this not true, your problem would be simple: you would know the total number of steps, you would know how many steps you've taken so far, and you would divide the latter by the former ;-)
Now, returning to the criterion of simplicity, I also advise you to take a look at Hilbert's 24th problem, which specifically asked for "criteria of simplicity, or proof of the greatest simplicity of certain proofs", and at the slightly related topic of proof compression. If you are philosophically inclined to understand these subjects further, I would suggest reading the classic Gödel, Escher, Bach.
Further notes: To understand why, consider a well-known mathematical artefact, the Mandelbrot set. Each pixel's colour is calculated by determining whether the iteration z(n+1) = z(n)^2 + c, for that specific c, is bounded; that is, "a complex number c is part of the Mandelbrot set if, when starting with z(0) = 0 and applying the iteration repeatedly, the absolute value of z(n) remains bounded however large n gets." Despite the iteration being extremely simple (you know, square a number and add a constant), there's absolutely no way to know whether it will remain bounded without actually performing an infinite number of iterations, or until a cycle is found (disregarding complex heuristics). In this sense, every fractal picture out there is a rough approximation that typically uses an escape-time algorithm as a heuristic to provide an educated guess as to whether the solution will be bounded or not.
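For what it's worth, here is a hedged sketch of that escape-time heuristic in Haskell (Data.Complex is in base); the cutoff is exactly the "educated guess" part:

    import Data.Complex (Complex ((:+)), magnitude)

    -- Iterate z <- z^2 + c up to a cutoff. |z| > 2 proves the point escapes;
    -- reaching the cutoff only *suggests* the orbit stays bounded.
    escapeTime :: Int -> Complex Double -> Maybe Int
    escapeTime cutoff c = go 0 (0 :+ 0)
      where
        go n z
          | magnitude z > 2 = Just n       -- provably unbounded after n steps
          | n >= cutoff     = Nothing      -- undecided: guess "bounded"
          | otherwise       = go (n + 1) (z * z + c)

    -- escapeTime 100 (1 :+ 0)    == Just 3   (escapes quickly)
    -- escapeTime 100 ((-1) :+ 0) == Nothing  (a period-2 cycle, never escapes)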
Can anyone give me an example of how we can improve code reusability using algebraic structures like groups, monoids and rings? (Or: how can I make use of these kinds of structures in programming, knowing at least that I didn't learn all that theory in high school for nothing?)
I heard this is possible, but I can't figure out a way of applying them in programming, or more generally of applying hardcore mathematics in programming.
It is not really the mathematical material itself that helps so much as the mathematical thinking. Abstraction is the key in programming. Transforming real-life concepts into numbers and relations is what we do every day. Algebra is the mother of it all: algebra is the set of rules that defines correctness, it is the highest level of abstraction, so understanding algebra means you can think more clearly, faster and more efficiently. From set theory to category theory, domain theory, etc., everything comes from practical challenges and from the need for abstraction and generalization.
In common practice you will not need to actually know these, although if you are thinking of developing things like AI agents, programming languages, or fundamental concepts and tools, then they are a must.
In functional programming, esp. Haskell, it's common to structure programs that transform states as monads. Doing so means you can reuse generic algorithms on monads in very different programs.
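A small sketch of that kind of reuse (assuming the standard mtl or transformers package for State): mapM is one generic algorithm over monads, and the very same function threads a counter in the State monad and short-circuits failure in Maybe.

    import Control.Monad.State (State, evalState, get, put)

    -- Number the elements of a list by threading a counter through State.
    label :: [a] -> State Int [(Int, a)]
    label = mapM step
      where
        step x = do
          n <- get
          put (n + 1)
          return (n, x)

    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv a b = Just (a `div` b)

    -- evalState (label "abc") 0   == [(0,'a'),(1,'b'),(2,'c')]
    -- mapM (safeDiv 10) [1, 2, 5] == Just [10, 5, 2]
    -- mapM (safeDiv 10) [1, 0, 5] == Nothing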
The C++ Standard Template Library features the concept of a monoid. The idea is again that generic algorithms may require an operation to satisfy the axioms of a monoid for their correctness.
E.g., if we can prove that the type T we're operating on (numbers, strings, whatever) is closed under the operation, we know we won't have to check for certain errors; we always get a valid T back. If we can prove that the operation is associative (x * (y * z) = (x * y) * z), then we can reuse the fork-join architecture: a simple but effective way of doing parallel programming that is implemented in various libraries.
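A small sketch (plain Haskell, no particular library) of why associativity is what licenses the fork-join split: a big combine can be computed chunk by chunk and the partial results merged, and each chunk could just as well be folded on its own thread without changing the answer.

    -- Because (<>) is associative, folding each chunk separately and then
    -- combining the partial results equals one big left-to-right fold.
    -- Each inner mconcat could be handed to a separate thread (e.g. with
    -- the `parallel` package); associativity guarantees the same result.
    chunkedFold :: Monoid m => Int -> [m] -> m
    chunkedFold n xs = mconcat (map mconcat (chunksOf n xs))
      where
        chunksOf _ [] = []
        chunksOf k ys = let (c, rest) = splitAt k ys in c : chunksOf k rest
        -- assumes n > 0

    -- chunkedFold 1000 (map Sum [1..100000]) == mconcat (map Sum [1..100000])
    --   (Sum is the addition monoid from Data.Monoid)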
Computer science seems to get a lot of mileage out of category theory these days. You get monads, monoids, functors -- an entire bestiary of mathematical entities that are being used to improve code reusability, harnessing the abstraction of abstract mathematics.
Lists are free monoids (generated by the elements of the list's type); binary trees have the shape of a group structure. You have either the finite or the infinite (lazy) variant.
Starting points:
http://en.wikipedia.org/wiki/Algebraic_data_type
http://en.wikipedia.org/wiki/Initial_algebra
http://en.wikipedia.org/wiki/F-algebra
You may want to learn category theory, and the way category theory approaches algebraic structures: it is exactly the way functional programming languages approach data structures, at least shapewise.
Example: the type Tree A is
Tree A = () | Tree A | Tree A * Tree A
which reads as the existence of an isomorphism (*) (writing G = Tree A)
1 + G + G x G -> G
which has the same shape as a group structure
phi : 1 + G + G x G -> G
() ∈ 1 -> e
x ∈ G -> x^(-1)
(x, y) ∈ G x G -> x * y
Indeed, binary trees can represent expressions, and they form an algebraic structure. An element of G reads as either the identity, an inverse of an element or the product of two elements. A binary tree is either a leaf, a single tree or a pair of trees. Note the similarity in shape.
(*) as well as a universal property, but there are two of them (finite trees or infinite lazy trees), so I won't delve into the details.
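A sketch of that shape correspondence in Haskell (I've added a constructor for generator elements so that the fold below isn't trivial): the type has one constructor per group operation, and interpreting it in a concrete group is just a structural fold.

    data G a = E                 -- corresponds to the identity e
             | Gen a             -- a generator (a leaf carrying a value)
             | Inv (G a)         -- corresponds to inversion, x^(-1)
             | Mul (G a) (G a)   -- corresponds to the product x * y

    -- Interpreting such a tree in the group of integers under addition:
    -- identity -> 0, inversion -> negate, product -> (+).
    evalZ :: G Integer -> Integer
    evalZ E         = 0
    evalZ (Gen n)   = n
    evalZ (Inv t)   = negate (evalZ t)
    evalZ (Mul s t) = evalZ s + evalZ t

    -- evalZ (Mul (Gen 2) (Inv (Gen 5))) == -3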
As I had no idea this stuff existed in the computer science world, please disregard this answer ;)
I don't think the two fields (no pun intended) have any overlap. Rings/fields/groups deal with mathematical objects. Consider a part of the definition of a field:
For every a in F, there exists an element −a in F, such that a + (−a) = 0. Similarly, for any a in F other than 0, there exists an element a^−1 in F, such that a · a^−1 = 1. (The elements a + (−b) and a · b^−1 are also denoted a − b and a/b, respectively.) In other words, subtraction and division operations exist.
What the heck does this mean in terms of programming? I surely can't have an additive inverse of a list object in Python (well, I could just destroy the object, but that is like the multiplicative inverse. I guess you could get somewhere trying to define a Python-ring, but it just won't work out in the end). Don't even think about dividing lists...
As for code reusability, I have absolutely no idea how this can even be applied, so this application is irrelevant.
This is my interpretation, but being a mathematics major probably makes me blind to other terminology from different fields (you know which one I'm talking about).
Monoids are ubiquitous in programming. In some programming languages, e.g. Haskell, we can make monoids explicit: http://blog.sigfpe.com/2009/01/haskell-monoids-and-their-uses.html
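For example, a small hypothetical monoid made explicit in Haskell: a Timeout setting where "nothing specified" is the neutral element and a later explicit value wins. Once the instance exists, generic functions such as mconcat and foldMap combine settings for free.

    newtype Timeout = Timeout (Maybe Int) deriving (Show)

    instance Semigroup Timeout where
      _ <> Timeout (Just t) = Timeout (Just t)  -- a later explicit setting wins
      t <> _                = t

    instance Monoid Timeout where
      mempty = Timeout Nothing                  -- "nothing specified"

    -- mconcat [Timeout Nothing, Timeout (Just 30), Timeout Nothing]
    --   == Timeout (Just 30)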
What is the most minimal functional programming language?
It depends on what you mean by minimal.
To start with, the ancestor of functional languages is, first and foremost, mathematical logic. The computational use of certain logics came after the fact. In a sense, many mathematical systems (the cores of which are usually quite minimal) could be called functional languages. But I doubt that's what you're after!
Best known is Alonzo Church's lambda calculus, of which there are variants and descendants:
The simplest form is what's called the untyped lambda calculus; this contains nothing but lambda abstractions, with no restrictions on their use. The creation of data structures using only anonymous functions is done with what's called Church encoding and represents data by fundamental operations on it; the number 5 becomes "repeat something 5 times", and so on.
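A hedged sketch of that Church encoding, using Haskell syntax purely as notation for the lambda terms:

    -- A Church numeral n is "apply a function n times".
    type Church a = (a -> a) -> a -> a

    zero, one, five :: Church a
    zero = \_ x -> x
    one  = \f x -> f x
    five = \f x -> f (f (f (f (f x))))

    suc :: Church a -> Church a
    suc n = \f x -> f (n f x)

    add :: Church a -> Church a -> Church a
    add m n = \f x -> m f (n f x)

    toInt :: Church Int -> Int
    toInt n = n (+ 1) 0      -- toInt (add one five) == 6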
Lisp-family languages are little more than untyped lambda calculus, augmented with atomic values, cons cells, and a handful of other things. I'd suspect Scheme is the most minimalist here since, if memory serves me, it was first created as a teaching language.
The original purpose of the lambda calculus, that of describing logical proofs, failed when the untyped form was shown to be inconsistent, which is a polite term for "lets you prove that false is true". (Historical trivia: the paper proving this, which was a significant thing at the time, did so by writing a logical proof that, in computational terms, went into an infinite loop.) Anyway, the use as a logic was recovered by introducing typed lambda calculus. These tend not to be directly useful as programming languages, however, particularly since being logically sound makes the language not Turing-complete.
However, similarly to how Lisps derive from untyped lambda calculus, a typed lambda calculus extended with built-in recursion, algebraic data types, and a few other things gets you the extended ML-family of languages. These tend to be pretty minimal
at heart, with syntactic constructs having straightforward translations to lambda terms in many cases. Besides the obvious ML dialects, this also includes Haskell and a few other languages. I'm not aware of any especially minimalist typed functional languages, however; such a language would likely suffer from poor usability far worse than a minimalist untyped language.
So as far as lambda calculus variants go, the pure untyped lambda calculus with no extra features is Turing-complete and about as minimal as you can get!
However, arguably more minimal is to eliminate the concept of "variables" entirely--in fact, this was originally done to simplify meta-mathematical proofs about logical systems, if memory serves me--and use only higher-order functions called combinators. Here we have:
Combinatory logic itself, as originally invented by Moses Schönfinkel and developed extensively by Haskell Curry. Each combinator is defined by a simple substitution rule, for instance Sxyz = xz(yz). The lowercase letters are used like variables in this definition, but keep in mind that combinatory logic itself doesn't use variables, or assign names to anything at all. Combinatory logic is minimal, to be sure, but not too friendly as a programming language. Best-known is the SK combinator base. S is defined as in the example above; K is Kxy = x. Those two combinators alone suffice to make it Turing-complete! This is almost frighteningly minimal.
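For what it's worth, S and K transcribe directly into Haskell (the type signatures are an extra that combinatory logic itself doesn't have), and the classic derivation of the identity combinator, I = S K K, falls out:

    -- Sxyz = xz(yz)
    s :: (a -> b -> c) -> (a -> b) -> a -> c
    s x y z = x z (y z)

    -- Kxy = x
    k :: a -> b -> a
    k x y = x

    -- The identity combinator, derived rather than built in.
    i :: a -> a
    i = s k k       -- i 42 == 42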
Unlambda is a language based on SK combinators, extending it with a few extra combinators with special properties. Less minimal, but lets you write "Hello World".
Even two combinators is more than you need, though. Various one-combinator bases exist; perhaps the best known is the iota combinator, defined as ιx = xSK, which is used in a minimalist language also called Iota.
Also of some note is Lazy K, which is distinguished from Unlambda by not introducing additional combinators, having no side effects, and using lazy evaluation. Basically, it's the Haskell of the combinator-based-esoteric-language world. It supports both the SK base, as well as the iota combinator.
Which of those strikes you as most "minimal" is probably a matter of taste.
Arguably the most minimal functional languages are Iota and Jot, because they use only one combinator (while Unlambda needs two). Here is a short explanation: http://web.archive.org/web/20061105204247/http://ling.ucsd.edu/~barker/Iota/
I'd imagine the most minimal functional "programming language" would be lambda calculus.
BrainF*ck is a simple, easy to use programming language. Here's a quick rundown.
Imagine you have a near-infinite range of boxes, each empty. Luckily, you are not alone! You can move back and forth along the line, put things in them, and take them out. Though quite basic, with enough time you can do about anything: http://www.iwriteiam.nl/Ha_bf_inter.html. Here are the commands.
+ | add one to current box
- | take one from current box
> | move one box to the right
< | move one box to the left
[] | loop (repeat the bracketed commands while the current box is nonzero)
. | print current value
, | input current value
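If it helps to see the model concretely, here is my own toy sketch of such a machine as a Haskell interpreter (not the interpreter linked above): the tape is a zipper of cells, and a loop repeats while the current box is nonzero.

    import Data.Char (chr, ord)

    -- The program is parsed into a tiny AST so that a loop body can be
    -- re-run while the current box is nonzero.
    data Op = Inc | Dec | MovL | MovR | Out | In | Loop [Op]

    simple :: Char -> Maybe Op
    simple '+' = Just Inc
    simple '-' = Just Dec
    simple '<' = Just MovL
    simple '>' = Just MovR
    simple '.' = Just Out
    simple ',' = Just In
    simple _   = Nothing

    parse :: String -> [Op]
    parse = fst . go
      where
        go []         = ([], [])
        go ('[' : cs) = let (body, rest)  = go cs
                            (ops,  rest') = go rest
                        in (Loop body : ops, rest')
        go (']' : cs) = ([], cs)
        go (c : cs)   = case simple c of
                          Just o  -> let (ops, rest) = go cs in (o : ops, rest)
                          Nothing -> go cs          -- anything else is a comment

    -- The "boxes": cells to the left, the current cell, cells to the right.
    data Tape = Tape [Int] Int [Int]

    blank :: Tape
    blank = Tape (repeat 0) 0 (repeat 0)

    run :: String -> String -> String
    run prog input = out
      where
        (_, _, out) = exec (parse prog) blank input

        exec :: [Op] -> Tape -> String -> (Tape, String, String)
        exec [] tape inp = (tape, inp, "")
        exec (o : os) tape@(Tape ls x rs) inp = case o of
          Inc  -> exec os (Tape ls (x + 1) rs) inp
          Dec  -> exec os (Tape ls (x - 1) rs) inp
          MovL -> let (l : ls') = ls in exec os (Tape ls' l (x : rs)) inp
          MovR -> let (r : rs') = rs in exec os (Tape (x : ls) r rs') inp
          Out  -> let (t, i, rest) = exec os tape inp in (t, i, chr x : rest)
          In   -> case inp of
                    (c : cs) -> exec os (Tape ls (ord c) rs) cs
                    []       -> exec os (Tape ls 0 rs) []
          Loop body
            | x == 0    -> exec os tape inp
            | otherwise -> let (t, i, o1)   = exec body tape inp
                               (t', i', o2) = exec (o : os) t i
                           in (t', i', o1 ++ o2)

For example, run "++++++++[>++++++++<-]>+." "" prints "A", since the second box is driven up to 65.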
other stuff to look at:
P" | simplified BF
language f | newer simplified BF
http://www2.gvsu.edu/miljours/bf.html | cool BF stuff/intro
https://www.esolangs.org/wiki/Language_list | list of similar langs/variants
An esoteric programming language (a.k.a. esolang) is a programming language designed to test the boundaries of computer programming language design, as a proof of concept, as software art, as a hacking interface to another language (particularly functional programming or procedural programming languages), or as a joke. The use of "esoteric" distinguishes these languages from programming languages that working developers use to write software. Usually, an esolang's creators do not intend the language to be used for mainstream programming, although some esoteric features, such as visuospatial syntax, have inspired practical applications in the arts. Such languages are often popular among hackers and hobbyists.
In the object-oriented paradigm, I would create an object/conceptual model before I start implementing it in an OO language.
Is there anything parallel to the object model in functional programming? Is it called a functional model? Or do we create the same conceptual model in both paradigms before implementing it in one of the languages?
Are there articles/books where I can read about the functional model, in case it exists?
Or, to put it a different way: even if we are using a functional programming language, would we still start with an object model?
In fact there is. There is a form of specification for functional languages, based on abstract data types, called algebraic specification. Its constructs behave very similarly to objects in some ways; however, they are logical and mathematical, and are immutable like functional constructs.
A particular functional specification language that's used in the Algorithms and Data Structures class in the University of Buenos Aires has generators, observers, and additional operations.
A generator is an expression that is both an instance and a possible composition of the data type.
For example, for a binary tree (ADT bt), we have null nodes, and binary nodes. So we would have the generators:
- nil
- bin(left: bt, root: a, right: bt)
Where left is an instance of a bt, the root is a generic value, and right is another bt.
So, nil is a valid form of a bt, but bin(bin(nil,1,nil),2,nil) is also valid, representing a binary tree whose root node has the value 2, whose left child node has the value 1, and whose right child is null.
So, for a function that, say, calculates the number of nodes in a tree, you define an observer of the ADT, and a set of axioms, one for each generator.
So, for example:
numberOfNodes(nil) == 0
numberOfNodes(bin(left,x,right))== 1 + numberOfNodes(left) + numberOfNodes(right)
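A hedged sketch of how those generators and axioms read when transcribed directly into Haskell (names mirror the specification above):

    -- One constructor per generator of the ADT bt.
    data Bt a = Nil                   -- the generator nil
              | Bin (Bt a) a (Bt a)   -- the generator bin(left, root, right)

    -- The observer, with one equation per generator, exactly as in the axioms.
    numberOfNodes :: Bt a -> Int
    numberOfNodes Nil                = 0
    numberOfNodes (Bin left _ right) = 1 + numberOfNodes left + numberOfNodes right

    -- numberOfNodes (Bin (Bin Nil 1 Nil) 2 Nil) == 2

Structural induction over Bt then mirrors the two equations: prove the property for Nil, and prove it for Bin assuming it holds for the two subtrees.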
This has the advantage of using recursive definitions of operations, and the more formally interesting property that you can use something called structural induction to PROVE that your specification is correct (yes, you demonstrate that your algorithm will produce the proper result).
This is a fairly academic topic rarely seen outside of academic circles, but it's worth it to get an insight on program design that may change the way you think about algorithms and data structures.
The proper bibliography includes:
Bernot, G., Bidoit, M., and Knapik, T. 1995. Observational specifications and the indistinguishability assumption. Theor. Comput. Sci. 139, 1-2 (Mar. 1995), 275-314. DOI: http://dx.doi.org/10.1016/0304-3975(94)00017-D
Guttag, J. V. and Horning, J. J. 1993. Larch: Languages and Tools for Formal Specification. Springer-Verlag New York, Inc.
Liskov, Barbara and Guttag, John. Abstraction and Specification in Software Development. MIT Press, 1986.
Ehrig, H. and Mahr, B. Fundamentals of Algebraic Specification 1: Equations and Initial Semantics. Springer-Verlag, Berlin, Heidelberg, New York, Tokyo, 1985.
With corresponding links:
http://www.cs.st-andrews.ac.uk/~ifs/Resources/Notes/FormalSpec/AlgebraicSpec.pdf
http://nms.lcs.mit.edu/larch/pub/larchBook.ps
It's a heck of an interesting topic.
In both the OO and FP paradigms, you form your domain model (the problem you're solving) and then create objects in your program to mirror the domain objects. There are some differences, in that how the program objects mirror the domain objects is influenced by the paradigm and language you're using. Some examples (in Haskell):
from finance: Composing Contracts
from word processing: Bridging the Algorithm Gap
a simple web server: http://lstephen.wordpress.com/2008/02/14/a-simple-haskell-web-server/
music: http://www.haskell.org/haskore/
A flowchart and/or process model/diagram can be used as a functional model for non-OO programs. But it still doesn't give a sense of boundaries similar to that of the OO model.
http://en.wikipedia.org/wiki/Functional_model