In Isabelle, what do the angle brackets and double asterisks mean?

I'm trying to understand some Isabelle code, and there is some syntax I don't understand. I haven't seen them in tutorials, including the two bundled with the Isabelle2017 distribution, "Programming and Proving in Isabelle/HOL" and "The Isabelle/Isar Reference Manual". The fact that they're symbols, combined with Isabelle's very small user base, means that the answer is pretty un-google-able.
The first is tall angle brackets ⟨ ⟩; the second is a double asterisk **, which renders in the output console as ∧* (notably different from ASCII ^).
Here's an example:
lemma pre_fifth_pure [simp]:
"triple net failures (a ** b ** c ** d ** ⟨ P ⟩) cod post =
(P ⟶ triple net failures (a ** b ** c ** d) cod post)"
The angle brackets always seem to surround a proposition. The definition of the triple function implies that (a ** b ** c ** d) is of type state_element set ⇒ bool, where a state_element is just a datatype with a bunch of constructors:
datatype state_element =
StackHeightElm "nat"
| StackElm "nat * w256" ... (* 20 lines like this *)
Are these two pieces of syntax related? How could (a ** b ** c ** d) be a function? Why can it have different numbers of the things delimited by the stars? Is this somehow custom-defined syntax? The mysteries abound.

Isabelle comes with rich mechanisms for defining your own notation. It is therefore quite common to run into unfamiliar syntax when inspecting theories written by others.
Using Isabelle/jEdit, you can press Ctrl (Cmd for Mac) and click on syntax and names to jump to their definition sites (cf. 3.5 of the Isabelle/jEdit Manual).
In the sporadic instances where this does not work, you can try typing print_syntax into your theory text, which outputs all rules of the current inner syntax configuration (cf. 8.4.4 of the Isabelle/Isar Reference Manual). Hopefully this at least lets you figure out which theory some notation comes from. One would hope that benevolent theory authors refrain from grammar setups that break the hyperlink feature.
For the specific problem here, ** is syntactic sugar for sep_conj in https://github.com/pirapira/eth-isabelle/blob/master/sep_algebra/Separation_Algebra.thy#L108 and ⟨ _ ⟩ is sugar for pure from https://github.com/pirapira/eth-isabelle/blob/master/Hoare/Hoare.thy#L257, as Alex found out.

Related

Left recursion without following string

I have the following two definitions from Wikipedia:
Left recursion
A grammar is left-recursive if and only if there exists a nonterminal
symbol A that can derive to a sentential form with itself as the
leftmost symbol. Symbolically,
A ⇒ Aα where ⇒ indicates the operation of making one or more substitutions, and α is any sequence of terminal and nonterminal symbols.
and
Direct left recursion
Direct left recursion occurs when the definition can be satisfied with only one substitution. It requires a rule of the form A ⇒ Aα where α is a sequence of nonterminals and terminals,
https://en.wikipedia.org/wiki/Left_recursion
and an example:
S → AB
A → B | ab | abc
B → A | d | cd
It's not so much about left recursion itself, but more about α. Is α allowed to be the empty word (as a terminal) or an empty string? In all the examples of indirect left recursion I could find, there was always an α, like
A→Bab or
B→Ab
But I'm not sure how to argue if A → B and B → A (so without any α following). Is it still left recursion by definition?
α can be any sequence of terminals and non-terminals, including the empty sequence.
However, a production of the form A → A is pointless. Worse, it creates ambiguity, because it could be applied any number of times in a derivation, so the grammar cannot be parsed deterministically. That is equally true if there is a circular indirect derivation (A ⇒ A in your notation). That is why you rarely find such cases in examples. But they are still certainly left-recursive. (They are also right-recursive.)
Unit productions (A → B where A and B are different) are quite common, and should not be problematic. Obviously they need to be taken into account while analysing the grammar. But I don't think that was your question.
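To make this concrete, here is a small Haskell sketch (my own illustration, with made-up names) that decides left recursion by chasing leftmost nonterminals. Unit productions such as A → B count, since α is allowed to be empty; for simplicity the sketch ignores nullable prefixes, so only the first symbol of each right-hand side is considered.
import Data.List (nub)

-- A toy grammar representation: productions from a nonterminal
-- to a sequence of terminal (T) and nonterminal (N) symbols.
data Symbol = T Char | N Char deriving (Eq, Show)
type Grammar = [(Char, [Symbol])]

-- Nonterminals that can appear leftmost after one substitution.
leftCorners :: Grammar -> Char -> [Char]
leftCorners g a = nub [b | (a', N b : _) <- g, a' == a]

-- A is left-recursive iff A is reachable from itself via left corners.
leftRecursive :: Grammar -> Char -> Bool
leftRecursive g a = go [] (leftCorners g a)
  where
    go _ [] = False
    go seen (b:bs)
      | b == a        = True
      | b `elem` seen = go seen bs
      | otherwise     = go (b : seen) (leftCorners g b ++ bs)

-- The example grammar from the question: S → AB, A → B | ab | abc, B → A | d | cd.
example :: Grammar
example = [ ('S', [N 'A', N 'B'])
          , ('A', [N 'B']), ('A', [T 'a', T 'b']), ('A', [T 'a', T 'b', T 'c'])
          , ('B', [N 'A']), ('B', [T 'd']), ('B', [T 'c', T 'd']) ]
Here leftRecursive example 'A' evaluates to True: A ⇒ B ⇒ A with an empty α at each step.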

Associativity of word_cat from Word.thy

I am having trouble proving that the word_cat function from Word.thy is associative. This fact seems to be missing from the Word theory itself (or at least find_theorems and a manual browse of the theory reveal nothing relevant), but I require this lemma to proceed in the proof of another theorem.
More specifically, for the following lemma:
lemma word_cat_assoc:
fixes b1 :: "'a::len word" and b2 :: "'b::len word" and b3 :: "'c::len word"
shows "word_cat b1 (word_cat b2 b3) = word_cat (word_cat b1 b2) b3"
sorry
I'm not even sure how best to proceed here. I have used find_theorems to identify that the lemmas word_eq_iff and word_cat_bl may be of interest, but any attempt to proceed with these lemmas creates a massive mess. Does anybody have any hints?
More generally, it seems to me that working with the Word library itself is quite awkward, and I would appreciate any tips for working with it. I have at several points in my proofs required a case analysis on the result of a word_split w for some w. Using case_tac causes problems here as new type variables are invented for the word length type variables. Instead, I have to resort to a roundabout form of case analysis, first introducing a cut with subgoal_tac with explicit lengths, like so:
apply(subgoal_tac "∃b3::8 word. ∃b4::8 word. word_split b1 = (b3, b4)")
and then proceeding using this fact. Presumably there is a better way to work with the library than this?

Isabelle/HOL foundations

I have seen a lot of documentation about Isabelle's syntax and proof strategies. However, I have found little about its foundations. I have a few questions that I would be very grateful if someone could take the time to answer:
Why doesn't Isabelle/HOL admit functions that do not terminate? Many other languages such as Haskell do admit non-terminating functions.
What symbols are part of Isabelle's meta-language? I read that there are symbols in the meta-language for universal quantification (/\) and for implication (==>). However, these symbols have their counterparts in the object-level language (∀ and -->). I understand that --> is an object-level function of type bool => bool => bool. However, how are ∀ and ∃ defined? Are they object-level Boolean functions? If so, they are not computable (considering infinite domains). I noticed that I am able to write Boolean functions in terms of ∀ and ∃, but they are not computable. So what are ∀ and ∃? Are they part of the object-level? If so, how are they defined?
Are Isabelle theorems just Boolean expressions? Then Booleans are part of the meta-language?
As far as I know, Isabelle is a strict programming language. How can I use infinite objects? Let's say, infinite lists. Is it possible in Isabelle/HOL?
Sorry if these questions are very basic. I cannot seem to find a good tutorial on Isabelle's meta-theory. I would love it if someone could recommend a good tutorial on these topics.
Thank you very much.
1) You can define non-terminating (i.e. partial) functions in Isabelle (cf. the Function package manual, section 8). However, partial functions are more difficult to reason about, because whenever you want to use their definition equations (the psimps rules, which replace the simps rules of a normal function), you have to show that the function terminates on that particular input first.
In general, things like non-definedness and non-termination are always problematic in a logic – consider, for instance, the function ‘definition’ f x = f x + 1. If we were to take this as an equation on ℤ (integers), we could subtract f x from both sides and get 0 = 1. In Haskell, this problem is ‘solved’ by saying that this is not an equation on ℤ, but rather on ℤ ∪ {⊥} (the integers plus bottom) and the non-terminating function f evaluates to ⊥, and ‘⊥ + 1 = ⊥’, so everything works out fine.
However, if every single expression in your logic could potentially evaluate to ⊥ instead of a ‘proper’ value, reasoning in this logic would become very tedious. This is why Isabelle/HOL chooses to restrict itself to total functions; partiality has to be emulated with constructs like undefined (which is an arbitrary value that you know nothing about) or option types.
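To see the problem in a programming language that does admit such definitions, here is the ‘definition’ above transcribed to Haskell (a minimal sketch):
-- Accepted by GHC, but f n never terminates; denotationally f n = ⊥,
-- and since ⊥ + 1 = ⊥, the defining equation holds on ℤ ∪ {⊥},
-- not on ℤ itself.
f :: Integer -> Integer
f x = f x + 1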
2) I'm not an expert on Isabelle/Pure (the meta logic), but the most important symbols are definitely
⋀ (the universal meta quantifier)
⟹ (meta implication)
≡ (meta equality)
&&& (meta conjunction, defined in terms of ⟹)
Pure.term, Pure.prop, Pure.type, Pure.dummy_pattern, Pure.sort_constraint, which fulfil certain internal functions that I don't know much about.
You can find some information on this in the Isabelle/Isar Reference Manual in section 2.1, and probably more elsewhere in the manual.
Everything else (that includes ∀ and ∃, which indeed operate on boolean expressions) is defined in the object logic (HOL, usually). You can find the definitions, or rather the axiomatisations, in ~~/src/HOL/HOL.thy (where ~~ denotes the Isabelle root directory):
All_def: "All P ≡ (P = (λx. True))"
Ex_def: "Ex P ≡ ∀Q. (∀x. P x ⟶ Q) ⟶ Q"
Also note that many, if not most, Isabelle functions are not computable. Isabelle is not a programming language, although it does have a code generator that allows exporting Isabelle functions as code to programming languages, as long as you can give code equations for all the functions involved.
3)
Isabelle theorems are a complex datatype (cf. ~~/src/Pure/thm.ML) containing a lot of information, but the most important part, of course, is the proposition. A proposition is something from Isabelle/Pure, which in fact only has propositions and functions (and itself and dummy, but you can ignore those).
Propositions are not booleans – in fact, there isn't even a way to state that a proposition does not hold in Isabelle/Pure.
HOL then defines (or rather axiomatises) booleans and also axiomatises a coercion from booleans to propositions: Trueprop :: bool ⇒ prop
4) Isabelle is not a programming language, and apart from that, totality does not mean you have to restrict yourself to finite structures. Even in a total programming language, you can have infinite lists (cf. Idris's codata).
Isabelle is a theorem prover, and logically, infinite objects can be treated by axiomatising them and then reasoning about them using the axioms and rules that you have.
For instance, HOL assumes the existence of an infinite type and defines the natural numbers on that. That already gives you access to functions nat ⇒ 'a, which are essentially infinite lists.
You can also define infinite lists and other infinite data structures as codatatypes with the (co-)datatype package, which is based on bounded natural functors.
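As a small Haskell illustration of that last point (my own sketch): the same infinite stream of Fibonacci numbers, first as a function on naturals, then as a lazy list in codata style:
-- An "infinite list" as a function nat ⇒ 'a:
fib :: Int -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

-- The same stream as a corecursively defined lazy list:
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

-- take 8 fibs == map fib [0..7]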
Let me add some points to two of your questions.
1) Why doesn't Isabelle/HOL admit functions that do not terminate? Many other languages such as Haskell do admit non-terminating functions.
In short: Isabelle/HOL does not require termination of functions, but totality (i.e., that there is a specific result for each input). Totality does not mean that a function actually terminates when transcribed to a (functional) programming language, or even that it is computable at all.
Therefore, talking about termination is somewhat misleading, even though it is encouraged by the fact that Isabelle/HOL's function package uses the keyword termination for proving some property P about which I will have to say a little more below.
On the one hand the term "termination" might sound more intuitive to a wider audience. On the other hand, a more precise description of P would be well-foundedness of the function's call graph.
Don't get me wrong, termination is not really a bad name for the property P, it is even justified by the fact that many techniques that are implemented in the function package are very close to termination techniques from term rewriting or functional programming (like the size-change principle, dependency pairs, lexicographic orders, etc.).
I'm just saying that it can be misleading. The answer to why that is the case also touches on question 4 of the OP.
4) As far as I know Isabelle is a strict programming language. How can I use infinite objects? Let's say, infinite lists. Is it possible in Isabelle/HOL?
Isabelle/HOL is not a programming language and it specifically does not have any evaluation strategy (we could alternatively say: it has any evaluation strategy you like).
And here is why the word termination is misleading (drum roll): if there is no evaluation strategy and we have termination of a function f, people might expect f to terminate independently of the strategy used. But this is not the case. A termination proof of a function rather ensures that f is well-defined. Even if f is computable, a proof of P merely ensures that there is an evaluation strategy for which f terminates.
(As an aside: what I call "strategy" here, is typically influenced by so called cong-rules (i.e., congruence rules) in Isabelle/HOL.)
As an example, it is trivial to prove that the function (see Section 10.1 Congruence rules and evaluation order in the documentation of the function package):
fun f' :: "nat ⇒ bool"
where
"f' n ⟷ f' (n - 1) ∨ n = 0"
terminates (in the sense defined by termination) after adding the cong-rule:
lemma [fundef_cong]:
"Q = Q' ⟹ (¬ Q' ⟹ P = P') ⟹ (P ∨ Q) = (P' ∨ Q')"
by auto
This essentially states that logical or should be "evaluated" from right to left. However, if you write the same function in, e.g., OCaml, it causes a stack overflow ...
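For comparison, a Haskell transcription (my naming) shows the same sensitivity to evaluation order, since Haskell's (||) evaluates its left argument first:
-- Diverges for every n: the recursive disjunct is evaluated first.
fBad :: Integer -> Bool
fBad n = fBad (n - 1) || n == 0

-- Terminates for all n >= 0: the base case is checked first,
-- matching the right-to-left "strategy" of the cong-rule above.
fGood :: Integer -> Bool
fGood n = n == 0 || fGood (n - 1)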
EDIT: this answer is not really correct, check out Lars' comment below.
Unfortunately I don't have enough reputation to post this as a comment, so here is my go at an answer (please bear in mind I am no expert in Isabelle, but I also had similar questions once):
1) The idea is to prove statements about the defined functions. I am not sure how familiar you are with computability theory, but think about the halting problem and the fact that most undecidability results stem from it (such as the acceptance problem). Imagine defining a function that you can't prove terminates. How could you then still prove that it returns the number 42 when given input "ABC" and doesn't go into an infinite loop?
If instead you limit yourself to terminating functions, you can prove much more about them, essentially making a trade-off (or at least this is how I see it).
These ideas stem from Constructivism and Intuitionism and I recommend you check out Robert Harper's very interesting lecture series: https://www.youtube.com/watch?v=9SnefrwBIDc&list=PLGCr8P_YncjXRzdGq2SjKv5F2J8HUFeqN on Type Theory
You should check out especially the part about the absence of the Law of Excluded middle: http://youtu.be/3JHTb6b1to8?t=15m34s
2) See Manuel's answer.
3,4) Again see Manuel's answer keeping in mind Intuitionistic logic: "the fundamental entity is not the boolean, but rather the proof that something is true".
For me it took a long time to get adjusted to this way of thinking and I'm still not sure I understand it. I think the key though is to understand it is a more-or-less completely different way of thinking.

In pure functional languages, is data (strings, ints, floats.. ) also just functions?

I was thinking about pure object-oriented languages like Ruby, where everything, including numbers, ints, floats, and strings, is itself an object. Is it the same with pure functional languages? For example, in Haskell, are numbers and strings also functions?
I know Haskell is based on lambda calculus, which represents everything, including data and operations, as functions. It would seem logical to me that a "purely functional language" would model everything as a function, as well as keep with the definition that a function must always return the same output for the same inputs and has no state.
It's okay to think about that theoretically, but...
Just like in Ruby not everything is an object (argument lists, for instance, are not objects), not everything in Haskell is a function.
For more reference, check out this neat post: http://conal.net/blog/posts/everything-is-a-function-in-haskell
#wrhall gives a good answer. However you are somewhat correct that in the pure lambda calculus it is consistent for everything to be a function, and the language is Turing-complete (capable of expressing any pure computation that Haskell, etc. is).
That gives you some very strange things, since the only thing you can do to anything is to apply it to something else. When do you ever get to observe something? You have some value f and want to know something about it; your only choice is to apply it to some value x to get f x, which is another function, and your only choice is to apply that to another value y, to get f x y, and so on.
Often I interpret the pure lambda calculus as talking about transformations on things that are not functions, but only capable of expressing functions itself. That is, I can make a function (with a bit of Haskelly syntax sugar for recursion & let):
purePlus = \zero succ natCase ->
    let plus = \m n -> natCase m n (\m' -> succ (plus m' n))
    in plus (succ (succ zero)) (succ (succ zero))
Here I have expressed the computation 2+2 without needing to know that there are such things as non-functions. I simply took what I needed as arguments to the function I was defining, and the values of those arguments could be church encodings or they could be "real" numbers (whatever that means) -- my definition does not care.
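For instance (my own illustration), instantiating those arguments with ordinary Integers:
-- A natCase for plain Integers: scrutinee, zero-case, successor-case.
natCaseInt :: Integer -> a -> (Integer -> a) -> a
natCaseInt m z s = if m == 0 then z else s (m - 1)

-- purePlus 0 (+ 1) natCaseInt evaluates to 4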
And you could think the same thing of Haskell. There is no particular reason to think that there are things which are not functions, nor is there a particular reason to think that everything is a function. But Haskell's type system at least prevents you from applying an argument to a number (anybody thinking about fromInteger right now needs to hold their tongue! :-). In the above interpretation, it is because numbers are not necessarily modeled as functions, so you can't necessarily apply arguments to them.
In case it isn't clear by now, this whole answer has been somewhat of a technical/philosophical digression, and the easy answer to your question is "no, not everything is a function in functional languages". Functions are the things you can apply arguments to, that's all.
The "pure" in "pure functional" refers to the "freedom from side effects" kind of purity. It has little relation to the meaning of "pure" being used when people talk about a "pure object-oriented language", which simply means that the language manipulates purely (only) in objects.
The reason is that pure-as-in-only is a reasonable distinction to use to classify object-oriented languages, because there are languages like Java and C++, which clearly have values that don't have all that much in common with objects, and there are also languages like Python and Ruby, for which it can be argued that every value is an object1
Whereas for functional languages, there are no practical languages which are "pure functional" in the sense that every value the language can manipulate is a function. It's certainly possible to program in such a language. The most basic versions of the lambda calculus don't have any notion of things that are not functions, but you can still do arbitrary computation with them by coming up with ways of representing the things you want to compute on as functions.2
But while the simplicity and minimalism of the lambda calculus tends to be great for proving things about programming, actually writing substantial programs in such a "raw" programming language is awkward. The function representation of basic things like numbers also tends to be very inefficient to implement on actual physical machines.
But there is a very important distinction between languages that encourage a functional style but allow untracked side effects anywhere, and ones that actually enforce that your functions are "pure" functions (similar to mathematical functions). Object-oriented programming is very strongly wed to the use of impure computations3, so there are no practical object-oriented programming languages that are pure in this sense.
So the "pure" in "pure functional language" means something very different from the "pure" in "pure object-oriented language".4 In each case the "pure vs not pure" distinction is one that is completely uninteresting applied to the other kind of language, so there's no very strong motive to standardise the use of the term.
[1] There are corner cases to pick at in all "pure object-oriented" languages that I know of, but that's not really very interesting. It's clear that the object metaphor goes much further in languages in which 1 is an instance of some class, and that class can be sub-classed, than it does in languages in which 1 is something other than an object.
[2] All computation is about representation anyway. Computers don't know anything about numbers or anything else. They just have bit-patterns that we use to represent numbers, and operations on bit-patterns that happen to correspond to operations on numbers (because we designed them so that they would).
[3] This isn't fundamental either. You could design a "pure" object-oriented language that was pure in this sense. I tend to write most of my OO code to be pure anyway.
[4] If this seems obtuse, you might reflect that the terms "functional", "object", and "language" have vastly different meanings in other contexts also.
A very different angle on this question: all sorts of data in Haskell can be represented as functions, using a technique called Church encodings. This is a form of inversion of control: instead of passing data to functions that consume it, you hide the data inside a set of closures, and to consume it you pass in callbacks describing what to do with this data.
Any program that uses lists, for example, can be translated into a program that uses functions instead of lists:
-- | A list corresponds to a function of this type:
type ChurchList a r = (a -> r -> r)  -- ^ how to handle a cons cell
                   -> r              -- ^ how to handle the empty list
                   -> r              -- ^ result of processing the list

listToCPS :: [a] -> ChurchList a r
listToCPS xs = \f z -> foldr f z xs
That function is taking a concrete list as its starting point, but that's not necessary. You can build up ChurchList functions out of just pure functions:
-- | The empty 'ChurchList'.
nil :: ChurchList a r
nil = \f z -> z

-- | Add an element at the front of a 'ChurchList'.
cons :: a -> ChurchList a r -> ChurchList a r
cons x xs = \f z -> f x (xs f z)
foldChurchList :: (a -> r -> r) -> r -> ChurchList a r -> r
foldChurchList f z xs = xs f z

-- Folding with list-building functions instantiates the result type of
-- the input list to a ChurchList itself, hence the nested types below:
mapChurchList :: (a -> b) -> ChurchList a (ChurchList b r) -> ChurchList b r
mapChurchList f = foldChurchList step nil
  where step x = cons (f x)

filterChurchList :: (a -> Bool) -> ChurchList a (ChurchList a r) -> ChurchList a r
filterChurchList pred = foldChurchList step nil
  where step x xs = if pred x then cons x xs else xs
That last function uses Bool, but of course we can replace Bool with functions as well:
-- | A Bool can be represented as a function that chooses between two
-- given alternatives.
type ChurchBool r = r -> r -> r
true, false :: ChurchBool r
true a _ = a
false _ b = b
filterChurchList' :: (a -> ChurchBool (ChurchList a r)) -> ChurchList a (ChurchList a r) -> ChurchList a r
filterChurchList' pred = foldChurchList step nil
  where step x xs = pred x (cons x xs) xs
This sort of transformation can be done for basically any type, so in theory, you could get rid of all "value" types in Haskell, and keep only the () type, the (->) and IO type constructors, return and >>= for IO, and a suitable set of IO primitives. This would obviously be hella impractical—and it would perform worse (try writing tailChurchList :: ChurchList a r -> ChurchList a r for a taste).
Is getChar :: IO Char a function or not? The Haskell Report doesn't provide us with a definition, but it states that getChar is a function (see here). (Well, at least we can say that it is a function.)
So I think the answer is YES.
I don't think there can be a correct definition of "function" except "everything is a function". (What is a "correct definition"? Good question...) Consider the following example:
{-# LANGUAGE NoMonomorphismRestriction #-}
import Control.Applicative
f :: Applicative f => f Int
f = pure 1
g1 :: Maybe Int
g1 = f
g2 :: Int -> Int
g2 = f
Is f a function or datatype? It depends.

What are the most interesting equivalences arising from the Curry-Howard Isomorphism?

I came upon the Curry-Howard Isomorphism relatively late in my programming life, and perhaps this contributes to my being utterly fascinated by it. It implies that for every programming concept there exists a precise analogue in formal logic, and vice versa. Here's a "basic" list of such analogies, off the top of my head:
program/definition | proof
type/declaration | proposition
inhabited type | theorem/lemma
function | implication
function argument | hypothesis/antecedent
function result | conclusion/consequent
function application | modus ponens
recursion | induction
identity function | tautology
non-terminating function | absurdity/contradiction
tuple | conjunction (and)
disjoint union | disjunction (or) -- corrected by Antal S-Z
parametric polymorphism | universal quantification
So, to my question: what are some of the more interesting/obscure implications of this isomorphism? I'm no logician so I'm sure I've only scratched the surface with this list.
For example, here are some programming notions for which I'm unaware of pithy names in logic:
currying | "((a & b) => c) iff (a => (b => c))"
scope | "known theory + hypotheses"
And here are some logical concepts which I haven't quite pinned down in programming terms:
primitive type? | axiom
set of valid programs? | theory
Edit:
Here are some more equivalences collected from the responses:
function composition | syllogism -- from Apocalisp
continuation-passing | double negation -- from camccann
Since you explicitly asked for the most interesting and obscure ones:
You can extend C-H to many interesting logics and formulations of logics to obtain a really wide variety of correspondences. Here I've tried to focus on some of the more interesting ones rather than on the obscure, plus a couple of fundamental ones that haven't come up yet.
evaluation | proof normalisation/cut-elimination
variable | assumption
S K combinators | axiomatic formulation of logic (see the sketch after this list)
pattern matching | left-sequent rules
subtyping | implicit entailment (not reflected in expressions)
intersection types | implicit conjunction
union types | implicit disjunction
open code | temporal next
closed code | necessity
effects | possibility
reachable state | possible world
monadic metalanguage | lax logic
non-termination | truth in an unobservable possible world
distributed programs | modal logic S5/Hybrid logic
meta variables | modal assumptions
explicit substitutions | contextual modal necessity
pi-calculus | linear logic
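To make the "S K combinators" row concrete (a standard fact rather than anything specific to this answer): the types of the K and S combinators are precisely the two axiom schemes of the implicational fragment of intuitionistic logic in a Hilbert-style system, with application playing modus ponens:
k :: a -> b -> a                          -- axiom K: A → (B → A)
k x _ = x

s :: (a -> b -> c) -> (a -> b) -> a -> c  -- axiom S: (A → B → C) → ((A → B) → (A → C))
s f g x = f x (g x)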
EDIT: A reference I'd recommend to anyone interested in learning more about extensions of C-H:
"A Judgmental Reconstruction of Modal Logic" http://www.cs.cmu.edu/~fp/papers/mscs00.pdf - this is a great place to start because it starts from first principles and much of it is aimed to be accessible to non-logicians/language theorists. (I'm the second author though, so I'm biased.)
You're muddying things a little bit regarding nontermination. Falsity is represented by uninhabited types, which by definition can't be non-terminating because there's nothing of that type to evaluate in the first place.
Non-termination represents contradiction--an inconsistent logic. An inconsistent logic will of course allow you to prove anything, including falsity, however.
Ignoring inconsistencies, type systems typically correspond to an intuitionistic logic, and are by necessity constructivist, which means certain pieces of classical logic can't be expressed directly, if at all. On the other hand this is useful, because if a type is a valid constructive proof, then a term of that type is a means of constructing whatever you've proven the existence of.
A major feature of the constructivist flavor is that double negation is not equivalent to non-negation. In fact, negation is rarely a primitive in a type system, so instead we can represent it as implying falsehood, e.g., not P becomes P -> Falsity. Double negation would thus be a function with type (P -> Falsity) -> Falsity, which clearly is not equivalent to something of just type P.
However, there's an interesting twist on this! In a language with parametric polymorphism, type variables range over all possible types, including uninhabited ones, so a fully polymorphic type such as ∀a. a is, in some sense, almost-false. So what if we write double almost-negation by using polymorphism? We get a type that looks like this: ∀a. (P -> a) -> a. Is that equivalent to something of type P? Indeed it is, merely apply it to the identity function.
But what's the point? Why write a type like that? Does it mean anything in programming terms? Well, you can think of it as a function that already has something of type P somewhere, and needs you to give it a function that takes P as an argument, with the whole thing being polymorphic in the final result type. In a sense, it represents a suspended computation, waiting for the rest to be provided. In this sense, these suspended computations can be composed together, passed around, invoked, whatever. This should begin to sound familiar to fans of some languages, like Scheme or Ruby--because what it means is that double-negation corresponds to continuation-passing style, and in fact the type I gave above is exactly the continuation monad in Haskell.
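Here is that type written out in Haskell (my own rendering; it needs RankNTypes, and it is the shape of the continuation monad mentioned above):
{-# LANGUAGE RankNTypes #-}

-- ∀a. (P -> a) -> a, wrapped in a newtype.
newtype DoubleNeg p = DoubleNeg { runDoubleNeg :: forall a. (p -> a) -> a }

-- "Merely apply it to the identity function" to get the P back:
extract :: DoubleNeg p -> p
extract (DoubleNeg g) = g id

-- A suspended computation waiting for the rest to be provided:
suspend :: p -> DoubleNeg p
suspend x = DoubleNeg (\k -> k x)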
Your chart is not quite right; in many cases you have confused types with terms.
function type | implication
function | proof of implication
function argument | proof of hypothesis
function result | proof of conclusion
function application | RULE modus ponens
recursion | n/a [1]
structural induction | fold (foldr for lists)
mathematical induction | fold for naturals (data N = Z | S N)
identity function | proof of A -> A, for all A
non-terminating function | n/a [2]
tuple | normal proof of conjunction
sum | disjunction
n/a [3] | first-order universal quantification
parametric polymorphism | second-order universal quantification
currying | (A,B) -> C -||- A -> (B -> C), for all A,B,C
primitive type | axiom
types of typeable terms | theory
function composition | syllogism
substitution | cut rule
value | normal proof
[1] The logic for a Turing-complete functional language is inconsistent. Recursion has no correspondence in consistent theories. In an inconsistent logic/unsound proof theory you could call it a rule which causes inconsistency/unsoundness.
[2] Again, this is a consequence of completeness. This would be a proof of an anti-theorem if the logic were consistent -- thus, it can't exist.
[3] Doesn't exist in functional languages, since they elide first-order logical features: all quantification and parametrization is done over formulae. If you had first-order features, there would be a kind other than *, * -> *, etc.; the kind of elements of the domain of discourse. For example, in Father(X,Y) :- Parent(X,Y), Male(X), X and Y range over the domain of discourse (call it Dom), and Male :: Dom -> *.
function composition | syllogism
I really like this question. I don't know a whole lot, but I do have a few things (assisted by the Wikipedia article, which has some neat tables and such itself):
I think that sum types/union types (e.g. data Either a b = Left a | Right b) are equivalent to inclusive disjunction. And, though I'm not very well acquainted with Curry-Howard, I think this demonstrates it. Consider the following function:
andImpliesOr :: (a,b) -> Either a b
andImpliesOr (a,_) = Left a
If I understand things correctly, the type says that (a ∧ b) → (a ★ b) and the definition says that this is true, where ★ is either inclusive or exclusive or, whichever Either represents. You have Either representing exclusive or, ⊕; however, (a ∧ b) ↛ (a ⊕ b). For instance, ⊤ ∧ ⊤ ≡ ⊤, but ⊤ ⊕ ⊤ ≡ ⊥, and ⊤ ↛ ⊥. In other words, if both a and b are true, then the hypothesis is true but the conclusion is false, and so this implication must be false. However, clearly, (a ∧ b) → (a ∨ b), since if both a and b are true, then at least one is true. Thus, if discriminated unions are some form of disjunction, they must be the inclusive variety. I think this holds as a proof, but feel more than free to disabuse me of this notion.
Similarly, your definitions for tautology and absurdity as the identity function and non-terminating functions, respectively, are a bit off. The true formula is represented by the unit type, which is the type which has only one element (data ⊤ = ⊤; often spelled () and/or Unit in functional programming languages). This makes sense: since that type is guaranteed to be inhabited, and since there's only one possible inhabitant, it must be true. The identity function just represents the particular tautology that a → a.
Your comment about non-terminating functions is, depending on what precisely you meant, more off. Curry-Howard functions on the type system, but non-termination is not encoded there. According to Wikipedia, dealing with non-termination is an issue, as adding it produces inconsistent logics (e.g., I can define wrong :: a -> b by wrong x = wrong x, and thus “prove” that a → b for any a and b). If this is what you meant by “absurdity”, then you're exactly correct. If instead you meant the false statement, then what you want instead is any uninhabited type, e.g. something defined by data ⊥—that is, a data type without any way to construct it. This ensures that it has no values at all, and so it must be uninhabited, which is equivalent to false. I think you could probably also use a -> b, since if we forbid non-terminating functions, then this is also uninhabited, but I'm not 100% sure.
Wikipedia says that axioms are encoded in two different ways, depending on how you interpret Curry-Howard: either in the combinators or in the variables. I think the combinator view means that the primitive functions we are given encode the things we can say by default (similar to the way that modus ponens is an axiom because function application is primitive). And I think that the variable view may actually mean the same thing—combinators, after all, are just global variables which are particular functions. As for primitive types: if I'm thinking about this correctly, then I think that primitive types are the entities—the primitive objects that we're trying to prove things about.
According to my logic and semantics class, the fact that (a ∧ b) → c ≡ a → (b → c) (and also ≡ b → (a → c)) is called the exportation equivalence law, at least in natural deduction proofs. I didn't notice at the time that it was just currying—I wish I had, because that's cool!
While we now have a way to represent inclusive disjunction, we don't have a way to represent the exclusive variety. We should be able to use the definition of exclusive disjunction to represent it: a ⊕ b ≡ (a ∨ b) ∧ ¬(a ∧ b). I don't know how to write negation, but I do know that ¬p ≡ p → ⊥, and both implication and falsehood are easy. We should thus able to represent exclusive disjunction by:
data ⊥
data Xor a b = Xor (Either a b) ((a,b) -> ⊥)
This defines ⊥ to be the empty type with no values, which corresponds to falsity; Xor is then defined to contain both (and) Either an a or a b (or) and a function (implication) from (a,b) (and) to the bottom type (false). However, I have no idea what this means. (Edit 1: Now I do, see the next paragraph!) Since there are no values of type (a,b) -> ⊥ (are there?), I can't fathom what this would mean in a program. Does anyone know a better way to think about either this definition or another one? (Edit 1: Yes, camccann.)
Edit 1: Thanks to camccann's answer (more particularly, the comments he left on it to help me out), I think I see what's going on here. To construct a value of type Xor a b, you need to provide two things. First, a witness to the existence of an element of either a or b as the first argument; that is, a Left a or a Right b. And second, a proof that there are not elements of both types a and b—in other words, a proof that (a,b) is uninhabited—as the second argument. Since you'll only be able to write a function from (a,b) -> ⊥ if (a,b) is uninhabited, what does it mean for that to be the case? That would mean that some part of an object of type (a,b) could not be constructed; in other words, that at least one, and possibly both, of a and b are uninhabited as well! In this case, if we're thinking about pattern matching, you couldn't possibly pattern-match on such a tuple: supposing that b is uninhabited, what would we write that could match the second part of that tuple? Thus, we cannot pattern match against it, which may help you see why this makes it uninhabited. Now, the only way to have a total function which takes no arguments (as this one must, since (a,b) is uninhabited) is for the result to be of an uninhabited type too—if we're thinking about this from a pattern-matching perspective, this means that even though the function has no cases, there's no possible body it could have either, and so everything's OK.
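For reference, here is a runnable rendering of the Xor encoding using Void from Data.Void in place of ⊥ (a sketch; the helper names are mine):
import Data.Void (Void, absurd)

type Not a = a -> Void

data Xor a b = Xor (Either a b) (Not (a, b))

-- When b really is uninhabited, the second component can be written:
xorLeft :: a -> Xor a Void
xorLeft x = Xor (Left x) (\(_, v) -> absurd v)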
A lot of this is me thinking aloud/proving (hopefully) things on the fly, but I hope it's useful. I really recommend the Wikipedia article; I haven't read through it in any sort of detail, but its tables are a really nice summary, and it's very thorough.
Here's a slightly obscure one that I'm surprised wasn't brought up earlier: "classical" functional reactive programming corresponds to temporal logic.
Of course, unless you're a philosopher, mathematician or obsessive functional programmer, this probably brings up several more questions.
So, first off: what is functional reactive programming? It's a declarative way to work with time-varying values. This is useful for writing things like user interfaces because inputs from the user are values that vary over time. "Classical" FRP has two basic data types: events and behaviors.
Events represent values which only exist at discrete times. Keystrokes are a great example: you can think of the inputs from the keyboard as a character at a given time. Each keypress is then just a pair with the character of the key and the time it was pressed.
Behaviors are values that exist constantly but can be changing continuously. The mouse position is a great example: it is just a behavior of x, y coordinates. After all, the mouse always has a position and, conceptually, this position changes continually as you move the mouse. After all, moving the mouse is a single protracted action, not a bunch of discrete steps.
And what is temporal logic? Appropriately enough, it's a set of logical rules for dealing with propositions quantified over time. Essentially, it extends normal first-order logic with two quantifiers: □ and ◇. The first means "always": read □φ as "φ always holds". The second is "eventually": ◇φ means that "φ will eventually hold". This is a particular kind of modal logic. The following two laws relate the quantifiers:
□φ ⇔ ¬◇¬φ
◇φ ⇔ ¬□¬φ
So □ and ◇ are dual to each other in the same way as ∀ and ∃.
These two quantifiers correspond to the two types in FRP. In particular, □ corresponds to behaviors and ◇ corresponds to events. If we think about how these types are inhabited, this should make sense: a behavior is inhabited at every possible time, while an event only happens once.
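In (hedged) Haskell terms, the classical FRP types look roughly like this, with Behavior playing the role of □ and Event the role of ◇ (Time is kept concrete here only for illustration):
type Time = Double

newtype Behavior a = Behavior (Time -> a)  -- a value at every time: □a
newtype Event a    = Event [(Time, a)]     -- values at discrete times: ◇a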
Related to the relationship between continuations and double negation: the type of call/cc is Peirce's law (http://en.wikipedia.org/wiki/Call-with-current-continuation).
C-H is usually stated as correspondence between intuitionistic logic and programs. However if we add the call-with-current-continuation (callCC) operator (whose type corresponds to Peirce's law), we get a correspondence between classical logic and programs with callCC.
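A small Haskell example (my own, using the standard Control.Monad.Cont from mtl) showing callCC and its Peirce-law type:
import Control.Monad.Cont (Cont, runCont, callCC)

-- callCC :: ((a -> Cont r b) -> Cont r a) -> Cont r a
-- reads as Peirce's law ((P → Q) → P) → P under the correspondence.
earlyExit :: Cont r Int
earlyExit = callCC $ \k -> do
  _ <- k 42    -- calling the continuation exits with 42 ...
  return 0     -- ... so this line is never reached

-- runCont earlyExit id == 42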
2-continuation | Sheffer stroke
n-continuation language | Existential graph
Recursion | Mathematical Induction
One thing that is important but has not yet been investigated is the relationship between 2-continuations (continuations that take 2 parameters) and the Sheffer stroke. In classical logic, the Sheffer stroke can form a complete logic system by itself (plus some non-operator concepts), which means the familiar and, or, and not can be implemented using only the Sheffer stroke, or NAND.
This is important for the programming-type correspondence because it suggests that a single type combinator can be used to form all other types.
The type signature of a 2-continuation is (a,b) -> Void. With this we can define 1-continuations (normal continuations) as (a,a) -> Void, the product type as ((a,b)->Void,(a,b)->Void)->Void, and the sum type as ((a,a)->Void,(b,b)->Void)->Void. This gives an impression of its expressive power.
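Transliterating those encodings into Haskell type synonyms, with Void from Data.Void standing in for the empty type (my own rendering):
import Data.Void (Void)

type Not2 a b = (a, b) -> Void               -- 2-continuation
type Not1 a   = Not2 a a                     -- 1-continuation, as defined above
type Prod a b = Not2 (Not2 a b) (Not2 a b)   -- the product type encoding
type Sum  a b = Not2 (Not1 a) (Not1 b)       -- the sum type encoding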
If we dig further, we will find that Peirce's existential graphs are equivalent to a language whose only data type is the n-continuation, but I haven't seen any existing language in this form. So inventing one could be interesting, I think.
While it's not a simple isomorphism, this discussion of constructive LEM is a very interesting result. In particular, in the conclusion section, Oleg Kiselyov discusses how the use of monads to get double-negation elimination in a constructive logic is analogous to distinguishing computationally decidable propositions (for which LEM is valid in a constructive setting) from all propositions. The notion that monads capture computational effects is an old one, but this instance of the Curry-Howard isomorphism helps put it in perspective and helps get at what double-negation really "means".
Support for first-class continuations allows you to express P ∨ ¬P.
The trick is based on the fact that not calling the continuation and exiting with some expression is equivalent to calling the continuation with that same expression.
For more detailed view please see: http://www.cs.cmu.edu/~rwh/courses/logic/www-old/handouts/callcc.pdf
