What problems does curry's choice solve? - functional-programming

I am familiar with what the choice operator ((?)) does: it takes two arguments and non-deterministically yields either of them. We could define it as follows:
a ? _ = a
_ ? b = b
This can be used to introduce non-determinism between two values. However, what I don't understand is why we would want to do that.
What would be an example of a problem that could be solved by using (?)?

One example that is typically used to motivate non-determinism is a function that computes all permutations of a list.
insert e [] = [e]
insert e (x:xs) = (e : x : xs) ? (x : insert e xs)
perm [] = []
perm (x:xs) = insert x (perm xs)
The nice thing here is that you do not need to specify how you want to enumerate all lists; the search algorithm that underlies a logic programming language like Curry (by default, depth-first search) does the job for you. You merely give a specification of what a list in your result should look like.
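For comparison, the same specification can be emulated in plain Haskell by letting a list of results stand for a non-deterministic value (a hedged sketch of the usual list-monad encoding, not Curry code):
-- insert yields every way of placing e somewhere in the list;
-- a list of results stands for a non-deterministic value
insert :: a -> [a] -> [[a]]
insert e []       = [[e]]
insert e (x : xs) = (e : x : xs) : map (x :) (insert e xs)

perm :: [a] -> [[a]]
perm []       = [[]]
perm (x : xs) = concatMap (insert x) (perm xs)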
Hopefully, you find some more realistic examples in the following papers.
New Functional Logic Design Patterns
Functional Logic Design Patterns
Edit: As I recently published work on that topic, I want to add the application of probabilistic programming. In the paper we show that using non-determinism to model probabilistic values can have advantages over list-based approaches with respect to pruning of the search space. More precisely, when we perform a query on the probabilistic values, i.e., filter a distribution based on a predicate, the non-determinism behaves less strictly than a list-based approach and can prune the search space.

Related

Correct way of writing recursive functions in CLP(R) with Prolog

I am very confused about how CLP works in Prolog. Not only do I find it hard to see the benefits (I do see it in specific cases but find it hard to generalise those), but more importantly, I can hardly figure out how to correctly write a recursive predicate. Which of the following would be the correct form in a CLP(R) way?
factorial(0, 1).
factorial(N, F) :-
    { N > 0,
      PrevN = N - 1,
      factorial(PrevN, NewF),
      F = N * NewF }.
or
factorial(0, 1).
factorial(N, F) :-
    { N > 0,
      PrevN = N - 1,
      F = N * NewF },
    factorial(PrevN, NewF).
In other words, I am not sure when I should write code outside the constraints. To me, the first case would seem more logical, because PrevN and NewF belong to the constraints. But if that's true, I am curious to see in which cases it is useful to use predicates outside the constraints in a recursive function.
There are several overlapping questions and issues in your post, probably too many to coherently address to your complete satisfaction in a single post.
Therefore, I would like to state a few general principles first, and then—based on that—make a few specific comments about the code you posted.
First, I would like to address what I think is most important in your case:
LP ⊆ CLP
This means simply that CLP can be regarded as a superset of logic programming (LP). Whether it is to be considered a proper superset, or whether it in fact makes even more sense to regard them as denoting the same concept, is somewhat debatable. In my personal view, logic programming without constraints is much harder to understand and much less usable than with constraints. Given that even the very first Prolog systems had a constraint like dif/2, and that essential built-in predicates like (=)/2 perfectly fit the notion of "constraint", the boundaries, if they exist at all, seem at least somewhat artificial to me, suggesting that:
LP ≈ CLP
Be that as it may, the key concept when working with CLP (of any kind) is that the constraints are available as predicates, and used in Prolog programs like all other predicates.
Therefore, whether you have the goal factorial(N, F) or { N > 0 } is, at least in principle, the same concept: Both mean that something holds.
Note the syntax: The CLP(ℛ) constraints have the form { C }, which is {}(C) in prefix notation.
Note that the goal factorial(N, F) is not a CLP(ℛ) constraint! Neither is the following:
?- { factorial(N, F) }.
ERROR: Unhandled exception: type_error({factorial(_3958,_3960)},...)
Thus, { factorial(N, F) } is not a CLP(ℛ) constraint either!
Your first example therefore cannot work for this reason alone already. (In addition, you have a syntax error in the clause head: factorial (, so it also does not compile at all.)
When you learn to work with a constraint solver, check out the predicates it provides. For example, CLP(ℛ) provides {}/1 and a few other predicates, and has a dedicated syntax for stating relations that hold over floating point numbers (in this case).
Other constraint solvers provide their own predicates for describing the entities of their respective domains. For example, CLP(FD) provides (#=)/2 and a few other predicates to reason about integers. dif/2 lets you reason about any Prolog term. And so on.
From the programmer's perspective, this is exactly the same as using any other predicate of your Prolog system, whether it is built-in or stems from a library. In principle, it's all the same:
A goal like list_length(Ls, L) can be read as: "The length of the list Ls is L."
A goal like { X = A + B } can be read as: The number X is equal to the sum of A and B. For example, if you are using CLP(Q), it is clear that we are talking about rational numbers in this case.
In your second example, the body of the clause is a conjunction of the form (A, B), where A is a CLP(ℛ) constraint, and B is a goal of the form factorial(PrevN, NewF).
The point is: The CLP(ℛ) constraint is also a goal! Check it out:
?- write_canonical({a,b,c}).
{','(a,','(b,c))}
true.
So, you are simply using {}/1 from library(clpr), which is one of the predicates it exports.
You are right that PrevN and NewF belong to the constraints. However, factorial(PrevN, NewF) is not part of the mini-language that CLP(ℛ) implements for reasoning over floating point numbers. Therefore, you cannot pull this goal into the CLP(ℛ)-specific part.
From a programmer's perspective, a major attraction of CLP is that it blends completely seamlessly into "normal" logic programming, to the point that it can in fact hardly be distinguished from it at all: the constraints are simply predicates, and written down like all other goals.
Whether you label a library predicate a "constraint" or not hardly makes any difference: All predicates can be regarded as constraints, since they can only constrain answers, never relax them.
Note that both examples you post are recursive! That's perfectly OK. In fact, recursive predicates will likely be the majority of situations in which you use constraints in the future.
However, for the concrete case of factorial, your Prolog system's CLP(FD) constraints are likely a better fit, since they are completely dedicated to reasoning about integers.

What are practical examples of the higher-order functions foldl and foldr?

The typical academic example is to sum a list.
Are there real-world examples of the use of fold that will shed light on its utility?
fold is perhaps the most fundamental operation on sequences. Asking for its utility is like asking for the utility of a for loop in an imperative language.
Given a list (or array, or tree, or ..), a starting value, and a function, the fold operator reduces the list to a single result. It is also the natural catamorphism (destructor) for lists.
Any operation that takes a list as input and produces an output after inspecting the elements of the list can be encoded as a fold. E.g.
sum      = foldr (+) 0
length   = foldr (\_ n -> 1 + n) 0
reverse  = foldr (\x xs -> xs ++ [x]) []
map f    = foldr (\x ys -> f x : ys) []
filter p = foldr (\x xs -> if p x then x : xs else xs) []
The fold operator is not specific to lists, but can be generalised in a uniform way to ‘regular’ datatypes.
So, as one of the most fundamental operations on a wide variety of data types, it certainly does have some use out there. Being able to recognize when an algorithm can be described as a fold is a useful skill that will lead to cleaner code.
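To illustrate that generalisation, here is a fold for a binary tree (a small sketch of my own; the names Tree, foldTree and sumTree are not from the references below):
-- A binary tree and its fold (catamorphism), analogous to foldr for lists.
data Tree a = Leaf | Node (Tree a) a (Tree a)

foldTree :: b -> (b -> a -> b -> b) -> Tree a -> b
foldTree leaf _    Leaf         = leaf
foldTree leaf node (Node l x r) =
  node (foldTree leaf node l) x (foldTree leaf node r)

-- e.g. summing a tree, just as sum above is a list fold
sumTree :: Num a => Tree a -> a
sumTree = foldTree 0 (\l x r -> l + x + r)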
References:
A tutorial on the universality and expressiveness of fold
Writing foldl in terms of foldr
On folds
Lots And Lots Of foldLeft Examples lists the following functions:
sum
product
count
average
last
penultimate
contains
get
to string
reverse
unique
to set
double
insertion sort
pivot (part of quicksort)
encode (count consecutive elements)
decode (generate consecutive elements)
group (into sublists of even sizes)
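As a hedged Haskell rendering of two items from that list (my own translations, not taken from the linked article):
import Data.List (foldl')

-- "count": how many elements satisfy a predicate
count :: (a -> Bool) -> [a] -> Int
count p = foldl' (\n x -> if p x then n + 1 else n) 0

-- "average": fold the sum and the length in a single pass
-- (partial: undefined on the empty list)
average :: [Double] -> Double
average xs = total / fromIntegral len
  where
    (total, len) = foldl' (\(s, n) x -> (s + x, n + 1)) (0, 0 :: Int) xs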
My lame answer is that:
foldr is for reducing the problem to the primitive case and then assembling back up (it behaves like non-tail recursion)
foldl is for reducing the problem and assembling the solution at every step, so at the primitive case you have the solution ready (it behaves like tail recursion / iteration)
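A tiny illustration of that difference (my own example, using a non-associative operator so the nesting is visible):
-- foldr nests to the right and rebuilds on the way back up:
--   foldr (-) 0 [1, 2, 3]  =  1 - (2 - (3 - 0))  =   2
-- foldl threads an accumulator from the left, like a loop:
--   foldl (-) 0 [1, 2, 3]  =  ((0 - 1) - 2) - 3  =  -6
main :: IO ()
main = do
  print (foldr (-) 0 [1, 2, 3 :: Int])   -- 2
  print (foldl (-) 0 [1, 2, 3 :: Int])   -- -6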
This question reminded me immediately of a talk by Ralf Lämmel, Going Bananas (the fold operator notation looks like a banana, (| and |)). It has quite illustrative examples of mapping recursion to folds, and even of expressing one fold in terms of the other.
The classic paper (which is quite difficult at first) is Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire, named after the look of the other operators.

New to OCaml: How would I go about implementing Gaussian Elimination?

I'm new to OCaml, and I'd like to implement Gaussian Elimination as an exercise. I can easily do it with a stateful algorithm, meaning keep a matrix in memory and recursively operating on it by passing around a reference to it.
This statefulness, however, smacks of imperative programming. I know there are capabilities in OCaml to do this, but I'd like to ask if there is some clever functional way I haven't thought of first.
OCaml arrays are mutable, and it's hard to avoid treating them just like arrays in an imperative language.
Haskell has immutable arrays, but from my (limited) experience with Haskell, you end up switching to monadic, mutable arrays in most cases. Immutable arrays are probably amazing for certain specific purposes. I've always imagined you could write a beautiful implementation of dynamic programming in Haskell, where the dependencies among array entries are defined entirely by the expressions in them. The key is that you really only need to specify the contents of each array entry one time. I don't think Gaussian elimination follows this pattern, and so it seems it might not be a good fit for immutable arrays. It would be interesting to see how it works out, however.
You can use a Map to emulate a matrix. The key would be a pair of integers referencing the row and column. You'll want to use your own get x y function to ensure x < n and y < n though, instead of accessing the Map directly. (edit) You can use the compare function in Pervasives directly.
module OrderedPairs = struct
  type t = int * int
  let compare = Pervasives.compare
end

module Pairs = Map.Make (OrderedPairs)

let get_ n set x y =
  assert (x < n && y < n);
  Pairs.find (x, y) set

let set_ n set x y v =
  assert (x < n && y < n);
  Pairs.add (x, y) v set
Actually, having a general set of functions (get x y and set x y at a minimum), without specifying the implementation, would be an even better option. The functions can then be passed to the function that uses them, or be implemented in a module through a functor (a better solution, but having a set of functions that just does what you need is a reasonable first step since you're new to OCaml). In this way you can use a Map, an Array, a Hashtbl, or even a set of functions that accesses a file on the hard drive to implement the matrix if you wanted. This is the really important aspect of functional programming: you trust the interface rather than exploiting side effects, and you don't worry about the underlying implementation, since it's presumed to be pure.
The answers so far are using/emulating mutable data-types, but what does a functional approach look like?
To see, let's decompose the problem into some functional components:
Gaussian elimination involves a sequence of row operations, so it is useful first to define a function taking 2 rows and scaling factors, and returning the resultant row operation result.
The row operations we want should eliminate a variable (column) from a particular row, so let's define a function which takes a pair of rows and a column index and uses the previously defined row operation to return the modified row with that column entry zero.
Then we define two functions, one to convert a matrix into triangular form, and another to back-substitute a triangular matrix to the diagonal form (using the previously defined functions) by eliminating each column in turn. We could iterate or recurse over the columns, and the matrix could be defined as a list, vector or array of lists, vectors or arrays. The input is not changed, but a modified matrix is returned, so we can finally do:
let out_matrix = to_diagonal (to_triangular in_matrix);
What makes it functional is not whether the data types (array or list) are mutable, but how they are used. This approach may not be particularly 'clever' or the most efficient way to do Gaussian elimination in OCaml, but using pure functions lets you express the algorithm cleanly.
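To make the decomposition concrete, here is a minimal sketch of the forward-elimination step in that style, written in Haskell only for brevity (no pivoting, and the names rowOp, eliminateBelow and toTriangular are mine); an OCaml version would have the same shape:
type Row    = [Double]
type Matrix = [Row]          -- rows of an augmented matrix

-- the row operation: subtract factor * pivot row from a row
rowOp :: Double -> Row -> Row -> Row
rowOp factor pivot row = zipWith (\p r -> r - factor * p) pivot row

-- zero out the leading entry of every row below the pivot row
eliminateBelow :: Row -> [Row] -> [Row]
eliminateBelow pivot rows =
  [ rowOp (head row / head pivot) pivot row | row <- rows ]

-- forward elimination: the input is untouched, a new matrix is returned
toTriangular :: Matrix -> Matrix
toTriangular []         = []
toTriangular (r : rest) =
  r : map (0 :) (toTriangular (map tail (eliminateBelow r rest)))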

Haskell "collections" language design

Why is the Haskell implementation so focused on linked lists?
For example, I know Data.Sequence is more efficient
with most of the list operations (except for the cons operation), and is used a lot;
syntactically, though, it is "hardly supported". Haskell has put a lot of effort into functional abstractions, such as the Functor and the Foldable class, but their syntax is not compatible with that of the default list.
If, in a project I want to optimize and replace my lists with sequences - or if I suddenly want support for infinite collections, and replace my sequences with lists - the resulting code changes are abhorrent.
So I guess my wondering can be made concrete in questions such as:
Why isn't the type of map equal to (Functor f) => (a -> b) -> f a -> f b?
Why can't the [] and (:) functions be used for, for example, the type in Data.Sequence?
I am really hoping there is some explanation for this, that doesn't include the words "backwards compatibility" or "it just grew that way", though if you think there isn't, please let me know. Any relevant language extensions are welcome as well.
Before getting into why, here's a summary of the problem and what you can do about it. The constructors [] and (:) are reserved for lists and cannot be redefined. If you plan to use the same code with multiple data types, then define or choose a type class representing the interface you want to support, and use methods from that class.
Here are some generalized functions that work on both lists and sequences. I don't know of a generalization of (:), but you could write your own.
fmap instead of map
mempty instead of []
mappend instead of (++)
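For example, a function written only against those class methods works unchanged for both types (a small sketch; the names duplicated, asList and asSeq are mine):
import Data.Monoid (mappend)
import qualified Data.Sequence as Seq

-- Written only against the class methods mentioned above.
duplicated :: (Functor f, Monoid (f b)) => (a -> b) -> f a -> f b
duplicated f xs = fmap f xs `mappend` fmap f xs

-- Works for ordinary lists...
asList :: [Int]
asList = duplicated (+ 1) [1, 2, 3]

-- ...and, unchanged, for Data.Sequence.
asSeq :: Seq.Seq Int
asSeq = duplicated (+ 1) (Seq.fromList [1, 2, 3])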
If you plan to do a one-off data type replacement, then you can define your own names for things, and redefine them later.
-- For now, use lists
type List a = [a]
nil = []
cons x xs = x : xs
{- Switch to Seq in the future
-- type List a = Seq a
-- nil = empty
-- cons x xs = x <| xs
-}
Note that [] and (:) are constructors: you can also use them for pattern matching. Pattern matching is specific to one type constructor, so you can't extend a pattern to work on a new data type without rewriting the pattern-matching code.
Why there's so much list-specific stuff in Haskell
Lists are commonly used to represent sequential computations, rather than data. In an imperative language, you might build a Set with a loop that creates elements and inserts them into the set one by one. In Haskell, you do the same thing by creating a list and then passing the list to Set.fromList. Since lists so closely match this abstraction of computation, they have a place that's unlikely to ever be superseded by another data structure.
The fact remains that some functions are list-specific when they could have been generic. Some common functions like map were made list-specific so that new users would have less to learn. In particular, they provide simpler and (it was decided) more understandable error messages. Since it's possible to use generic functions instead, the problem is really just a syntactic inconvenience. It's worth noting that Haskell language implementations have very little list-specific code, so new data structures and methods can be just as efficient as the "built-in" ones.
There are several classes that are useful generalizations of lists:
Functor supplies fmap, a generalization of map.
Monoid supplies methods useful for collections with list-like structure. The empty list [] is generalized to other containers by mempty, and list concatenation (++) is generalized to other containers by mappend.
Applicative and Monad supply methods that are useful for interpreting collections as computations.
Traversable and Foldable supply useful methods for running computations over collections.
Of these, only Functor and Monad were in the influential Haskell 98 spec, so the others have been overlooked to varying degrees by library writers, depending on when the library was written and how actively it was maintained. The core libraries have been good about supporting new interfaces.
I remember reading somewhere that map is for lists by default since newcomers to Haskell would be put off if they made a mistake and saw a complex error about "Functors", which they have no idea about. Therefore, they have both map and fmap instead of just map.
EDIT: That "somewhere" is the Monad Reader Issue 13, page 20, footnote 3:
You might ask why we need a separate map function. Why not just do away with the current list-only map function, and rename fmap to map instead? Well, that's a good question. The usual argument is that someone just learning Haskell, when using map incorrectly, would much rather see an error about lists than about Functors.
For (:), the (<|) function seems to be a replacement. I have no idea about [].
A nitpick: Data.Sequence isn't more efficient for "list operations", it is more efficient for sequence operations. That said, a lot of the functions in Data.List are really sequence operations. The finger tree inside Data.Sequence has to do quite a bit more work for a cons (<|), the equivalent of the list (:), and its memory representation is also somewhat larger than a list, as it is made from two data types, a FingerTree and a Deep.
The extra syntax for lists is fine; it hits the sweet spot of what lists are good at - cons (:) and pattern matching from the left. Whether sequences should have extra syntax is open to further debate, but as you can get a very long way with lists, and lists are inherently simple, having good syntax for them is a must.
List isn't an ideal representation for Strings - the memory layout is inefficient, as each Char is wrapped with a constructor. This is why ByteStrings were introduced. Although they are laid out as an array, ByteStrings have to do a bit of administrative work - [Char] can still be competitive if you are using short strings. In GHC there are language extensions to give ByteStrings more String-like syntax.
The other major lazy functional language, Clean, has always represented strings as byte arrays, but its type system made this more practical - I believe the ByteString library uses unsafePerformIO under the hood.
With version 7.8, GHC supports overloaded list literals; see the manual. For example, given appropriate IsList instances, you can write
['0' .. '9'] :: Set Char
[1 .. 10] :: Vector Int
[("default",0), (k1,v1)] :: Map String Int
['a' .. 'z'] :: Text
(quoted from the documentation).
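For instance, with the extension enabled, and assuming the IsList instance that newer versions of containers provide for Data.Set, a minimal sketch looks like this:
{-# LANGUAGE OverloadedLists #-}
import Data.Set (Set)

-- the literal is desugared via GHC.Exts.fromList, so it builds a Set
digits :: Set Char
digits = ['0' .. '9']

main :: IO ()
main = print digits   -- fromList "0123456789"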
I am pretty sure this won't be an answer to your question, but still.
I wish Haskell had more liberal function names (mixfix!) à la Agda. Then the syntax for list constructors (:, []) wouldn't have been magic; we could at least hide the list type and use the same tokens for our own types.
The amount of code change while migrating between list and custom sequence types would then be minimal.
About map, you are a bit luckier. You can always hide map, and set it equal to fmap yourself.
import Prelude hiding(map)
map :: (Functor f) => (a -> b) -> f a -> f b
map = fmap
Prelude is great, but it isn't the best part of Haskell.

What are the most interesting equivalences arising from the Curry-Howard Isomorphism?

I came upon the Curry-Howard Isomorphism relatively late in my programming life, and perhaps this contributes to my being utterly fascinated by it. It implies that for every programming concept there exists a precise analogue in formal logic, and vice versa. Here's a "basic" list of such analogies, off the top of my head:
program/definition | proof
type/declaration | proposition
inhabited type | theorem/lemma
function | implication
function argument | hypothesis/antecedent
function result | conclusion/consequent
function application | modus ponens
recursion | induction
identity function | tautology
non-terminating function | absurdity/contradiction
tuple | conjunction (and)
disjoint union | disjunction (or) -- corrected by Antal S-Z
parametric polymorphism | universal quantification
So, to my question: what are some of the more interesting/obscure implications of this isomorphism? I'm no logician so I'm sure I've only scratched the surface with this list.
For example, here are some programming notions for which I'm unaware of pithy names in logic:
currying | "((a & b) => c) iff (a => (b => c))"
scope | "known theory + hypotheses"
And here are some logical concepts which I haven't quite pinned down in programming terms:
primitive type? | axiom
set of valid programs? | theory
Edit:
Here are some more equivalences collected from the responses:
function composition | syllogism -- from Apocalisp
continuation-passing | double negation -- from camccann
Since you explicitly asked for the most interesting and obscure ones:
You can extend C-H to many interesting logics and formulations of logics to obtain a really wide variety of correspondences. Here I've tried to focus on some of the more interesting ones rather than on the obscure, plus a couple of fundamental ones that haven't come up yet.
evaluation | proof normalisation/cut-elimination
variable | assumption
S K combinators | axiomatic formulation of logic
pattern matching | left-sequent rules
subtyping | implicit entailment (not reflected in expressions)
intersection types | implicit conjunction
union types | implicit disjunction
open code | temporal next
closed code | necessity
effects | possibility
reachable state | possible world
monadic metalanguage | lax logic
non-termination | truth in an unobservable possible world
distributed programs | modal logic S5/Hybrid logic
meta variables | modal assumptions
explicit substitutions | contextual modal necessity
pi-calculus | linear logic
EDIT: A reference I'd recommend to anyone interested in learning more about extensions of C-H:
"A Judgmental Reconstruction of Modal Logic" http://www.cs.cmu.edu/~fp/papers/mscs00.pdf - this is a great place to start because it starts from first principles and much of it is aimed to be accessible to non-logicians/language theorists. (I'm the second author though, so I'm biased.)
You're muddying things a little bit regarding nontermination. Falsity is represented by uninhabited types, which by definition can't be non-terminating because there's nothing of that type to evaluate in the first place.
Non-termination represents contradiction--an inconsistent logic. An inconsistent logic will of course allow you to prove anything, including falsity, however.
Ignoring inconsistencies, type systems typically correspond to an intuitionistic logic, and are by necessity constructivist, which means certain pieces of classical logic can't be expressed directly, if at all. On the other hand this is useful, because if a type is a valid constructive proof, then a term of that type is a means of constructing whatever you've proven the existence of.
A major feature of the constructivist flavor is that double negation is not equivalent to non-negation. In fact, negation is rarely a primitive in a type system, so instead we can represent it as implying falsehood, e.g., not P becomes P -> Falsity. Double negation would thus be a function with type (P -> Falsity) -> Falsity, which clearly is not equivalent to something of just type P.
However, there's an interesting twist on this! In a language with parametric polymorphism, type variables range over all possible types, including uninhabited ones, so a fully polymorphic type such as ∀a. a is, in some sense, almost-false. So what if we write double almost-negation by using polymorphism? We get a type that looks like this: ∀a. (P -> a) -> a. Is that equivalent to something of type P? Indeed it is, merely apply it to the identity function.
But what's the point? Why write a type like that? Does it mean anything in programming terms? Well, you can think of it as a function that already has something of type P somewhere, and needs you to give it a function that takes P as an argument, with the whole thing being polymorphic in the final result type. In a sense, it represents a suspended computation, waiting for the rest to be provided. In this sense, these suspended computations can be composed together, passed around, invoked, whatever. This should begin to sound familiar to fans of some languages, like Scheme or Ruby--because what it means is that double-negation corresponds to continuation-passing style, and in fact the type I gave above is exactly the continuation monad in Haskell.
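Spelled out as a small, hedged Haskell sketch (the names DN, suspend and resume are mine; the standard library version is Cont in Control.Monad.Cont):
{-# LANGUAGE RankNTypes #-}

-- the "double almost-negation" from the paragraph above: forall a. (p -> a) -> a
newtype DN p = DN { runDN :: forall a. (p -> a) -> a }

-- a suspended computation that already holds a p ...
suspend :: p -> DN p
suspend x = DN (\k -> k x)

-- ... and recovering the p by applying it to the identity function
resume :: DN p -> p
resume (DN f) = f id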
Your chart is not quite right; in many cases you have confused types with terms.
function type              | implication
function                   | proof of implication
function argument          | proof of hypothesis
function result            | proof of conclusion
function application       | RULE modus ponens
recursion                  | n/a [1]
structural induction       | fold (foldr for lists)
mathematical induction     | fold for naturals (data N = Z | S N)
identity function          | proof of A -> A, for all A
non-terminating function   | n/a [2]
tuple                      | normal proof of conjunction
sum                        | disjunction
n/a [3]                    | first-order universal quantification
parametric polymorphism    | second-order universal quantification
currying                   | (A,B) -> C -||- A -> (B -> C), for all A,B,C
primitive type             | axiom
types of typeable terms    | theory
function composition       | syllogism
substitution               | cut rule
value                      | normal proof
[1] The logic for a Turing-complete functional language is inconsistent. Recursion has no correspondence in consistent theories. In an inconsistent logic/unsound proof theory you could call it a rule which causes inconsistency/unsoundness.
[2] Again, this is a consequence of completeness. This would be a proof of an anti-theorem if the logic were consistent -- thus, it can't exist.
[3] Doesn't exist in functional languages, since they elide first-order logical features: all quantification and parametrization is done over formulae. If you had first-order features, there would be a kind other than *, * -> *, etc.; the kind of elements of the domain of discourse. For example, in Father(X,Y) :- Parent(X,Y), Male(X), X and Y range over the domain of discourse (call it Dom), and Male :: Dom -> *.
function composition | syllogism
I really like this question. I don't know a whole lot, but I do have a few things (assisted by the Wikipedia article, which has some neat tables and such itself):
I think that sum types/union types (e.g. data Either a b = Left a | Right b) are equivalent to inclusive disjunction. And, though I'm not very well acquainted with Curry-Howard, I think this demonstrates it. Consider the following function:
andImpliesOr :: (a,b) -> Either a b
andImpliesOr (a,_) = Left a
If I understand things correctly, the type says that (a ∧ b) → (a ★ b) and the definition says that this is true, where ★ is either inclusive or exclusive or, whichever Either represents. You have Either representing exclusive or, ⊕; however, (a ∧ b) ↛ (a ⊕ b). For instance, ⊤ ∧ ⊤ ≡ ⊤, but ⊤ ⊕ ⊥ ≡ ⊥, and ⊤ ↛ ⊥. In other words, if both a and b are true, then the hypothesis is true but the conclusion is false, and so this implication must be false. However, clearly, (a ∧ b) → (a ∨ b), since if both a and b are true, then at least one is true. Thus, if discriminated unions are some form of disjunction, they must be the inclusive variety. I think this holds as a proof, but feel more than free to disabuse me of this notion.
Similarly, your definitions for tautology and absurdity as the identity function and non-terminating functions, respectively, are a bit off. The true formula is represented by the unit type, which is the type which has only one element (data ⊤ = ⊤; often spelled () and/or Unit in functional programming languages). This makes sense: since that type is guaranteed to be inhabited, and since there's only one possible inhabitant, it must be true. The identity function just represents the particular tautology that a → a.
Your comment about non-terminating functions is, depending on what precisely you meant, more off. Curry-Howard functions on the type system, but non-termination is not encoded there. According to Wikipedia, dealing with non-termination is an issue, as adding it produces inconsistent logics (e.g., I can define wrong :: a -> b by wrong x = wrong x, and thus “prove” that a → b for any a and b). If this is what you meant by “absurdity”, then you're exactly correct. If instead you meant the false statement, then what you want instead is any uninhabited type, e.g. something defined by data ⊥—that is, a data type without any way to construct it. This ensures that it has no values at all, and so it must be uninhabited, which is equivalent to false. I think you could probably also use a -> b, since if we forbid non-terminating functions, then this is also uninhabited, but I'm not 100% sure.
Wikipedia says that axioms are encoded in two different ways, depending on how you interpret Curry-Howard: either in the combinators or in the variables. I think the combinator view means that the primitive functions we are given encode the things we can say by default (similar to the way that modus ponens is an axiom because function application is primitive). And I think that the variable view may actually mean the same thing—combinators, after all, are just global variables which are particular functions. As for primitive types: if I'm thinking about this correctly, then I think that primitive types are the entities—the primitive objects that we're trying to prove things about.
According to my logic and semantics class, the fact that (a ∧ b) → c ≡ a → (b → c) (and also that b → (a → c)) is called the exportation equivalence law, at least in natural deduction proofs. I didn't notice at the time that it was just currying—I wish I had, because that's cool!
While we now have a way to represent inclusive disjunction, we don't have a way to represent the exclusive variety. We should be able to use the definition of exclusive disjunction to represent it: a ⊕ b ≡ (a ∨ b) ∧ ¬(a ∧ b). I don't know how to write negation, but I do know that ¬p ≡ p → ⊥, and both implication and falsehood are easy. We should thus able to represent exclusive disjunction by:
data ⊥
data Xor a b = Xor (Either a b) ((a,b) -> ⊥)
This defines ⊥ to be the empty type with no values, which corresponds to falsity; Xor is then defined to contain both (and) Either an a or a b (or) and a function (implication) from (a,b) (and) to the bottom type (false). However, I have no idea what this means. (Edit 1: Now I do, see the next paragraph!) Since there are no values of type (a,b) -> ⊥ (are there?), I can't fathom what this would mean in a program. Does anyone know a better way to think about either this definition or another one? (Edit 1: Yes, camccann.)
Edit 1: Thanks to camccann's answer (more particularly, the comments he left on it to help me out), I think I see what's going on here. To construct a value of type Xor a b, you need to provide two things. First, a witness to the existence of an element of either a or b as the first argument; that is, a Left a or a Right b. And second, a proof that there are not elements of both types a and b—in other words, a proof that (a,b) is uninhabited—as the second argument. Since you'll only be able to write a function from (a,b) -> ⊥ if (a,b) is uninhabited, what does it mean for that to be the case? That would mean that some part of an object of type (a,b) could not be constructed; in other words, that at least one, and possibly both, of a and b are uninhabited as well! In this case, if we're thinking about pattern matching, you couldn't possibly pattern-match on such a tuple: supposing that b is uninhabited, what would we write that could match the second part of that tuple? Thus, we cannot pattern match against it, which may help you see why this makes it uninhabited. Now, the only way to have a total function which takes no arguments (as this one must, since (a,b) is uninhabited) is for the result to be of an uninhabited type too—if we're thinking about this from a pattern-matching perspective, this means that even though the function has no cases, there's no possible body it could have either, and so everything's OK.
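To make that concrete, here is a small hedged example where the right-hand type is empty, so the required "no elements of both" function can actually be written (the names Bottom and exclusiveLeft are mine):
-- Bottom plays the role of ⊥; an empty data declaration (Haskell 2010)
data Bottom

data Xor a b = Xor (Either a b) ((a, b) -> Bottom)

-- with b uninhabited, (a, b) is uninhabited too, and the required
-- "proof" (a, b) -> Bottom is simply snd
exclusiveLeft :: a -> Xor a Bottom
exclusiveLeft x = Xor (Left x) snd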
A lot of this is me thinking aloud/proving (hopefully) things on the fly, but I hope it's useful. I really recommend the Wikipedia article; I haven't read through it in any sort of detail, but its tables are a really nice summary, and it's very thorough.
Here's a slightly obscure one that I'm surprised wasn't brought up earlier: "classical" functional reactive programming corresponds to temporal logic.
Of course, unless you're a philosopher, mathematician or obsessive functional programmer, this probably brings up several more questions.
So, first off: what is functional reactive programming? It's a declarative way to work with time-varying values. This is useful for writing things like user interfaces because inputs from the user are values that vary over time. "Classical" FRP has two basic data types: events and behaviors.
Events represent values which only exist at discrete times. Keystrokes are a great example: you can think of the inputs from the keyboard as a character at a given time. Each keypress is then just a pair with the character of the key and the time it was pressed.
Behaviors are values that exist constantly but can be changing continuously. The mouse position is a great example: it is just a behavior of x, y coordinates. After all, the mouse always has a position and, conceptually, this position changes continually as you move the mouse. After all, moving the mouse is a single protracted action, not a bunch of discrete steps.
And what is temporal logic? Appropriately enough, it's a set of logical rules for dealing with propositions quantified over time. Essentially, it extends normal first-order logic with two quantifiers: □ and ◇. The first means "always": read □φ as "φ always holds". The second is "eventually": ◇φ means that "φ will eventually hold". This is a particular kind of modal logic. The following two laws relate the quantifiers:
□φ ⇔ ¬◇¬φ
◇φ ⇔ ¬□¬φ
So □ and ◇ are dual to each other in the same way as ∀ and ∃.
These two quantifiers correspond to the two types in FRP. In particular, □ corresponds to behaviors and ◇ corresponds to events. If we think about how these types are inhabited, this should make sense: a behavior is inhabited at every possible time, while an event only happens once.
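In types, the usual "classical FRP" denotations make the analogy visible (a hedged sketch; Time, Behavior and Event here are illustrative, not taken from any particular library):
type Time = Double   -- stand-in; any ordered notion of time works

newtype Behavior a = Behavior (Time -> a)   -- defined at every time ("always", □)
newtype Event    a = Event [(Time, a)]      -- occurrences at discrete times ("eventually", ◇)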
Related to the relationship between continuations and double negation, the type of call/cc is Peirce's law http://en.wikipedia.org/wiki/Call-with-current-continuation
C-H is usually stated as correspondence between intuitionistic logic and programs. However if we add the call-with-current-continuation (callCC) operator (whose type corresponds to Peirce's law), we get a correspondence between classical logic and programs with callCC.
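As a hedged illustration in Haskell, callCC from the standard Cont monad can be given exactly this reading (the name peirce is mine):
import Control.Monad.Cont (Cont, callCC)

-- callCC specialised to Cont reads as Peirce's law, ((P -> Q) -> P) -> P,
-- with Cont r marking the "classical" positions
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC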
2-continuation | Sheffer stroke
n-continuation language | Existential graph
Recursion | Mathematical Induction
One thing that is important, but has not yet been investigated, is the relationship between 2-continuations (continuations that take 2 parameters) and the Sheffer stroke. In classical logic, the Sheffer stroke can form a complete logic system by itself (plus some non-operator concepts). This means the familiar and, or, not can be implemented using only the Sheffer stroke, or nand.
This is an important fact for the corresponding programming types, because it suggests that a single type combinator can be used to form all other types.
The type signature of a 2-continuation is (a,b) -> Void. With this we can define a 1-continuation (a normal continuation) as (a,a) -> Void, a product type as ((a,b)->Void,(a,b)->Void)->Void, and a sum type as ((a,a)->Void,(b,b)->Void)->Void. This gives an impression of its expressive power.
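Those signatures can be transcribed directly as Haskell type synonyms (a hedged sketch; Void is the empty type from Data.Void, and the synonym names are mine):
import Data.Void (Void)

type TwoCont a b = (a, b) -> Void                         -- 2-continuation
type OneCont a   = TwoCont a a                            -- (a, a) -> Void
type Product a b = TwoCont (TwoCont a b) (TwoCont a b)    -- product encoding
type Sum a b     = TwoCont (OneCont a) (OneCont b)        -- sum encoding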
If we dig further, we will find that Peirce's existential graphs are equivalent to a language whose only data type is the n-continuation, but I haven't seen any existing language in this form. So inventing one could be interesting, I think.
While it's not a simple isomorphism, this discussion of constructive LEM is a very interesting result. In particular, in the conclusion section, Oleg Kiselyov discusses how the use of monads to get double-negation elimination in a constructive logic is analogous to distinguishing computationally decidable propositions (for which LEM is valid in a constructive setting) from all propositions. The notion that monads capture computational effects is an old one, but this instance of the Curry--Howard isomorphism helps put it in perspective and helps get at what double-negation really "means".
Support for first-class continuations allows you to express P ∨ ¬P.
The trick is based on the fact that not calling the continuation and exiting with some expression is equivalent to calling the continuation with that same expression.
For more detailed view please see: http://www.cs.cmu.edu/~rwh/courses/logic/www-old/handouts/callcc.pdf

Resources