Associativity of word_cat from Word.thy

I am having trouble proving that the word_cat function from Word.thy is associative. This fact seems to be missing from the Word theory itself (or at least find_theorems and a manual browse of the theory reveal nothing relevant), but I require this lemma to proceed in the proof of another theorem.
More specifically, for the following lemma:
lemma word_cat_assoc:
fixes b1 :: "'a::len word" and b2 :: "'b::len word" and b3 :: "'c::len word"
shows "word_cat b1 (word_cat b2 b3) = word_cat (word_cat b1 b2) b3"
sorry
I'm not even sure how best to proceed, here. I have used find_theorems to identify that the lemmas word_eq_iff and word_cat_bl may be of interest, but any attempt to proceed with these lemmas creates a massive mess. Does anybody have any hints?
More generally, it seems to me that working with the Word library itself is quite awkward, and I would appreciate any tips for working with it. I have at several points in my proofs required a case analysis on the result of a word_split w for some w. Using case_tac causes problems here as new type variables are invented for the word length type variables. Instead, I have to resort to a roundabout form of case analysis, first introducing a cut with subgoal_tac with explicit lengths, like so:
apply(subgoal_tac "∃b3::8 word. ∃b4::8 word. word_split b1 = (b3, b4)")
and then proceeding using this fact. Presumably there is a better way to work with the library than this?

Related

How to have full control over substitution in Isabelle

In Isabelle I find myself often using
apply(subst xx)
apply(rule subst)
apply(subst_tac xx)
and similar commands, but often it is hit or miss. Are there any resources on how to guide the term unification and how to precisely specify the terms that should be substituted for?
For example if there are multiple ways to perform unification, how can I disambiguate?
If I have multiple equalities among the premises, how can I tell Isabelle which one of them to use? I spend way too much time wrestling with such seemingly simple problems.
This book https://www21.in.tum.de/~nipkow/LNCS2283/ has a chapter dedicated to substitution but it's far too short, only covers erule ssubst and doesn't really answer my questions.
To give some examples, this is ssubst
lemma ssubst: "t = s ⟹ P s ⟹ P t"
by (drule sym) (erule subst)
but what about
lemma arg_cong: "x = y ⟹ f x = f y"
by (iprover intro: refl elim: subst)
How can I do erule_tac arg_cong and specify exactly the desired f, x and y? Anything I tried resulted in "Failed to apply proof method", which is not a particularly enlightening error message.
As I recall, a more elaborate substitution method is available, in particular for restricting substitution to certain contexts. But to answer your question properly it's essential to know what sort of assertions you are trying to prove. If you are working with short expressions (up to a couple of lines long), then it's much better to guide the series of transformations using equational reasoning, via also and finally. (See Programming and Proving in Isabelle/HOL, 4.2.2 Chains of (In)Equations.) Then at the cost of writing out these expressions, you'll often find that you can prove each step automatically, without detailed substitutions, and you can follow your reasoning.
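For instance, a fully spelled-out chain over a toy arithmetic goal (made up here, not taken from the question) might look as follows; each individual step is closed automatically:
lemma
  fixes a b c :: nat
  shows "(a + b) * c = c * a + c * b"
proof -
  have "(a + b) * c = a * c + b * c" by (simp add: algebra_simps)
  also have "... = c * a + c * b" by (simp add: algebra_simps)
  finally show ?thesis .
qed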
Also note that if your expressions are long because they contain large, repeated subexpressions, you can introduce abbreviations using define. You will then not only have shorter and clearer proofs, but you will find that automation will perform much better.
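A toy sketch of that idea (the goal and the names are made up; ys abbreviates the repeated subexpression):
lemma "set (rev (map Suc xs)) = set (map Suc xs)"
proof -
  define ys where "ys = map Suc xs"   (* short name for the repeated subexpression *)
  have "set (rev ys) = set ys" by simp
  then show ?thesis unfolding ys_def .
qed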
The other situation is when you are working with verification conditions dozens of lines long, or even longer. In that case it is worth looking for more advanced substitution packages.

Accumulating results while using them in Isabelle/Isar

Sometimes in a proof I find myself needing to accumulate results, but also needing to use the last result, so I end up using "also then" for that purpose:
proof
have ...
also then have ...
also then have ...
ultimately show ...
qed
I feel like there are more idiomatic ways to do this that I don't know about. On the other hand, this might be the standard way to do it and encouraged by the community.
So in light of that, I have two questions:
Is using "also then" discouraged?
If so, what alternatives can I use to accumulate results while using them?
I will start by providing some background. You have broached the subject known as calculational reasoning in Isabelle. Calculational reasoning is described in subsection 1.2 of the document The Isabelle/Isar Reference Manual.
Two of the most common patterns for calculational reasoning are
have "a R b" sorry
also have "b R c" sorry
also have "c R d" sorry
finally have "a R d" by assumption
(where R is a transitive relation, such as =, written using the infix notation) and
have P sorry
moreover have Q sorry
moreover have R sorry
ultimately have S by (rule assms(1))
Commands like also and moreover use a special fact named calculation to store intermediate results. For example, as the calculation in the first example above proceeds, the fact calculation changes in the following manner
have "a R b" sorry
also have "b R c" (*calculation: a R b*) sorry
also have "c R d" (*calculation: a R c*) sorry
finally have "a R d" by assumption
In this case, the transitivity rule of R is used for chaining the predicates. Thus, the final goal can be discharged by assumption. The situation is different for the moreover ... ultimately pattern:
have P sorry
moreover have Q (*calculation: P*) sorry
moreover have R (*calculation: P, Q*) sorry
ultimately have S (*P ⟹ Q ⟹ R ⟹ S*) by (rule assms(1))
In this case, the fact calculation merely accumulates all previous results.
The implementation of the calculational reasoning is explained in subsection 6.3 of the document The Isabelle/Isar Reference Manual. However, I omit the details in this post.
I will now make an attempt to answer your questions in the context of what was stated above.
Is using "also then" discouraged?
I believe that this is not necessarily discouraged, and there are some instances of this pattern in the AFP. However, I imagine that this specific pattern is a reasonably uncommon use case.
If so, what alternatives can I use to accumulate results while using
them?
I believe that if you, indeed, need to merely accumulate results (while, possibly, using them in the intermediate steps), the best pattern to use would be moreover ... ultimately. However, of course, this depends on what exactly is meant by the "accumulation of results".
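For example, one can name the intermediate facts and refer back to the previous one explicitly while moreover keeps accumulating everything (a sketch in the style of the example above, again assuming assms(1) is P ⟹ Q ⟹ R ⟹ S):
have p: P sorry
moreover have q: Q using p sorry (*calculation: P*)
moreover have r: R using q sorry (*calculation: P, Q*)
ultimately have S by (rule assms(1)) (*calculation: P, Q, R*)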
Remark 1
I hope that from the discussion above it is apparent that the use of also in conjunction with ultimately is very unconventional. In most cases, it makes little sense to use such a pattern.
Remark 2
The pattern also ... finally is often used in conjunction with the abbreviation ...:
have "a R b" sorry
also have "... R c" sorry
also have "... R d" sorry
finally have "a R d" by assumption
Of course, the benefits can only become apparent if b and c are sufficiently long subterms.

How to manage all the various proof methods

Is there a "generic" informal algorithm that users of Isabelle follow, when they are trying to prove something that isn't proved immediately by auto or sledgehammer? A kind of general way of figuring out, if auto needs additional lemmas, formulated by the user, to succeed or if better some other proof method is used.
A related question is: Is there maybe a table to be found somewhere with all the proof methods, together with the context in which to apply them? When I'm reading through the Programming and Proving tutorial, the descriptions of the various methods (and of variants of some methods, such as the many variants of auto) are scattered through the text, which constantly makes me go back and forth between the text and the Isabelle code (which also leads to forgetting what exactly is used for what) and results in a very inefficient workflow.
No, there's no "generic" informal way. You can use try0 which tries all standard proof methods (like auto, blast, fastforce, …) and/or sledgehammer which is more advanced.
After that, the fun part starts.
Can this theorem be shown with simpler helper lemmas? You can use the command "sorry" for assuming that a lemma is true.
How would I prove this on a piece of paper? And then try to do this proof in Isabelle.
Ask for help :) Lots of people on stack overflow, #isabelle on freenode and the Isabelle mailing list are waiting for your questions.
For your second question: No, there's no such overview. Maybe someone should write one, but as mentioned before you can simply use try0.
ammbauer's answer already covers lots of important stuff, but here are some more things that may help you:
When the automation gets stuck at a certain point, look at the available premises and the goal at that point. What kind of simplification did you expect the system to do at that point? Why didn't it do it? Perhaps the corresponding rule is just not in the simp set (add it with simp add:) or some preconditions of the rule could not be proved (in that case, add enough facts so that they can be proved, or do it yourself in an additional step)
Isar proofs are good. If you have some complicated goal, try breaking it down into smaller steps in Isar. If you have bigger auxiliary facts that may even be of more general interest, try pulling them out as auxiliary lemmas. Perhaps you can even generalise them a bit. Sometimes that even simplifies the proof.
In the same vein: Too much information can confuse both you and Isabelle. You can introduce local definitions in Isar with define x where "x = …" and unfold them with x_def. This makes your goals smaller and cleaner and decreases the probability of the automation going down useless paths in its proof search.
Isabelle does not automatically unfold definitions, so if you have a definition, and you want to unfold it for a proof, you have to do that yourself by using unfolding foo_def or simp add: foo_def.
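As a small illustration (the constant triple is made up):
definition triple :: "nat ⇒ nat" where
  "triple n = 3 * n"

lemma "triple 2 = 6"
  by (simp add: triple_def)  (* plain simp would fail: triple is never unfolded automatically *)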
The defining equations of functions defined with fun or primrec are unfolded automatically by anything that uses the simplifier (simp, simp_all, force, auto), unless the equations (foo.simps) have manually been deleted from the simp set (with lemmas [simp del] = foo.simps or declare foo.simps [simp del]).
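In contrast, the equations of a fun definition are simp rules by default (again with made-up names):
fun double :: "nat ⇒ nat" where
  "double 0 = 0"
| "double (Suc n) = Suc (Suc (double n))"

lemma "double (Suc 0) = Suc (Suc 0)"
  by simp  (* double.simps are applied automatically *)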
Different proof methods are good at different things, and it takes some experience to know what method to use in what case. As a general rule, anything that requires only rewriting/simplification should be done with simp or simp_all. Anything related to classical reasoning (i.e. first-order logic or sets) calls for blast. If you need both rewriting and classical reasoning, try auto or force. Think of auto as a combination of simp and blast, and force is like an ‘all-or-nothing’ variant of auto that fails if it cannot solve the goal entirely. It also tries a little harder than auto.
Most proof methods can take options. You probably already know add: and del: for simp and simp_all, and the equivalent simp:/simp del: for auto. However, the classical reasoners (auto, blast, force, etc.) also accept intro:, dest:, elim: and the corresponding del: options. These are for declaring introduction, destruction, and elimination rules.
Some more information on the classical reasoner:
An introduction rule is a rule of the form P ⟹ Q ⟹ R that should be used whenever the goal has the form R, to replace it with P and Q
A destruction rule is a rule of the form P ⟹ Q ⟹ R that should be used whenever a fact of the form P is in the premises, to replace the goal G with the new goals Q and R ⟹ G.
An elimination rule is something like thm exE (elimination of the existential quantifier). These are like a generalisation of destruction rules that also allow introducing new variables. These rules often appear in things like case distinctions.
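Concretely, typical representatives of the three kinds of rules are:
thm conjI      (* intro rule: ⟦?P; ?Q⟧ ⟹ ?P ∧ ?Q *)
thm conjunct1  (* dest rule:  ?P ∧ ?Q ⟹ ?P *)
thm exE        (* elim rule:  ⟦∃x. ?P x; ⋀x. ?P x ⟹ ?Q⟧ ⟹ ?Q *)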
The classical reasoner used by auto, blast, force etc. will use the rules in the claset (i.e. that have been declared intro/dest/elim) automatically whenever appropriate. If doing that does not lead to a proof, the automation will backtrack at some point and try other rules. You can disable backtracking for specific rules by using intro!: instead of intro: (and analogously for the others). Then the automation will apply that rule whenever possible without ever looking back.
The basic proof methods rule, drule, erule correspond to applying a single intro/dest/elim rule and are good for single step reasoning, e.g. in order to find out why automatic methods fail to make progress at a certain point. intro is like rule but applies the set of rules it is given iteratively until it is no longer possible.
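For example, on a toy goal the single-step methods can be used like this:
lemma "A ∧ B ⟹ B ∧ A"
  apply (rule conjI)       (* intro rule: split the conclusion B ∧ A *)
   apply (erule conjE)     (* elim rule: take the premise A ∧ B apart *)
   apply assumption
  apply (drule conjunct1)  (* dest rule: extract A from the premise *)
  apply assumption
  done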
safe and clarify are occasionally useful. The former essentially strips away quantifiers and logical connectives (try it on a goal like ∀x. P x ∧ Q x ⟶ R x) and the latter similarly tries to ‘clean up’ the goal. (I forgot what it does exactly, I just use it occasionally when I think it might be useful)
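To see what safe does, try exactly that goal (it is not provable as stated, so the snippet ends with oops):
lemma "∀x. P x ∧ Q x ⟶ R x"
  apply safe
  (* new goal: ⋀x. P x ⟹ Q x ⟹ R x *)
  oops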

Isabelle/HOL foundations

I have seen a lot of documentation about Isabelle's syntax and proof strategies. However, little have I found about its foundations. I have a few questions that I would be very grateful if someone could take the time to answer:
Why doesn't Isabelle/HOL admit functions that do not terminate? Many other languages such as Haskell do admit non-terminating functions.
What symbols are part of Isabelle's meta-language? I read that there are symbols in the meta-language for Universal Quantification (/\) and for implication (==>). However, these symbols have their counterpart in the object-level language (∀ and -->). I understand that --> is an object-level function of type bool => bool => bool. However, how are ∀ and ∃ defined? Are they object-level Boolean functions? If so, they are not computable (considering infinite domains). I noticed that I am able to write Boolean functions in terms of ∀ and ∃, but they are not computable. So what are ∀ and ∃? Are they part of the object-level? If so, how are they defined?
Are Isabelle theorems just Boolean expressions? Then Booleans are part of the meta-language?
As far as I know, Isabelle is a strict programming language. How can I use infinite objects? Let's say, infinite lists. Is it possible in Isabelle/HOL?
Sorry if these questions are very basic. I do not seem to find a good tutorial on Isabelle's meta-theory. I would love if someone could recommend me a good tutorial on these topics.
Thank you very much.
You can define non-terminating (i.e. partial) functions in Isabelle (cf. Function package manual (section 8)). However, partial functions are more difficult to reason about, because whenever you want to use its definition equations (the psimps rules, which replace the simps rules of a normal function), you have to show that the function terminates on that particular input first.
In general, things like non-definedness and non-termination are always problematic in a logic – consider, for instance, the function ‘definition’ f x = f x + 1. If we were to take this as an equation on ℤ (integers), we could subtract f x from both sides and get 0 = 1. In Haskell, this problem is ‘solved’ by saying that this is not an equation on ℤ, but rather on ℤ ∪ {⊥} (the integers plus bottom) and the non-terminating function f evaluates to ⊥, and ‘⊥ + 1 = ⊥’, so everything works out fine.
However, if every single expression in your logic could potentially evaluate to ⊥ instead of a ‘proper’ value, reasoning in this logic will become very tedious. This is why Isabelle/HOL chooses to restrict itself to total functions; things like partiality have to be emulated with things like undefined (which is an arbitrary value that you know nothing about) or option types.
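For example, a 'partial' division on naturals can be modelled with an option type (the name safe_div is made up):
definition safe_div :: "nat ⇒ nat ⇒ nat option" where
  "safe_div m n = (if n = 0 then None else Some (m div n))"

lemma "safe_div 7 0 = None"
  by (simp add: safe_div_def)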
I'm not an expert on Isabelle/Pure (the meta logic), but the most important symbols are definitely
⋀ (the universal meta quantifier)
⟹ (meta implication)
≡ (meta equality)
&&& (meta conjunction, defined in terms of ⟹)
Pure.term, Pure.prop, Pure.type, Pure.dummy_pattern, Pure.sort_constraint, which fulfil certain internal functions that I don't know much about.
You can find some information on this in the Isabelle/Isar Reference Manual in section 2.1, and probably more elsewhere in the manual.
Everything else (that includes ∀ and ∃, which indeed operate on boolean expressions) is defined in the object logic (HOL, usually). You can find the definitions, or rather the axiomatisations, in ~~/src/HOL/HOL.thy (where ~~ denotes the Isabelle root directory):
All_def: "All P ≡ (P = (λx. True))"
Ex_def: "Ex P ≡ ∀Q. (∀x. P x ⟶ Q) ⟶ Q"
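The usual rules for the object-level quantifiers are then derived from these definitions rather than built in, e.g.:
thm allI   (* (⋀x. ?P x) ⟹ ∀x. ?P x *)
thm spec   (* ∀x. ?P x ⟹ ?P ?x *)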
Also note that many, if not most Isabelle functions are typically not computable. Isabelle is not a programming language, although it does have a code generator that allows exporting Isabelle functions as code to programming languages as long as you can give code equations for all the functions involved.
3)
Isabelle theorems are a complex datatype (cf. ~~/src/Pure/thm.ML) containing a lot of information, but the most important part, of course, is the proposition. A proposition is something from Isabelle/Pure, which in fact only has propositions and functions. (and itself and dummy, but you can ignore those).
Propositions are not booleans – in fact, there isn't even a way to state that a proposition does not hold in Isabelle/Pure.
HOL then defines (or rather axiomatises) booleans and also axiomatises a coercion from booleans to propositions: Trueprop :: bool ⇒ prop
Isabelle is not a programming language, and apart from that, totality does not mean you have to restrict yourself to finite structures. Even in a total programming language, you can have infinite lists. (cf. Idris's codata)
Isabelle is a theorem prover, and logically, infinite objects can be treated by axiomatising them and then reasoning about them using the axioms and rules that you have.
For instance, HOL assumes the existence of an infinite type and defines the natural numbers on that. That already gives you access to functions nat ⇒ 'a, which are essentially infinite lists.
You can also define infinite lists and other infinite data structures as codatatypes with the (co-)datatype package, which is based on bounded natural functors.
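A minimal sketch of such a codatatype of infinite ('lazy') lists, together with a corecursive definition (the names are made up; more complete versions exist in the HOL libraries):
codatatype 'a llist = LNil | LCons 'a "'a llist"

primcorec repeat :: "'a ⇒ 'a llist" where
  "repeat x = LCons x (repeat x)"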
Let me add some points to two of your questions.
1) Why doesn't Isabelle/HOL admit functions that do not terminate? Many other languages such as Haskell do admit non-terminating functions.
In short: Isabelle/HOL does not require termination of functions, but totality (i.e., that there is a specific result for each input). Totality does not mean that a function actually terminates when transcribed to a (functional) programming language, or even that it is computable at all.
Therefore, talking about termination is somewhat misleading, even though it is encouraged by the fact that Isabelle/HOL's function package uses the keyword termination for proving some property P about which I will have to say a little more below.
On the one hand the term "termination" might sound more intuitive to a wider audience. On the other hand, a more precise description of P would be well-foundedness of the function's call graph.
Don't get me wrong, termination is not really a bad name for the property P, it is even justified by the fact that many techniques that are implemented in the function package are very close to termination techniques from term rewriting or functional programming (like the size-change principle, dependency pairs, lexicographic orders, etc.).
I'm just saying that it can be misleading. The answer to why that is the case also touches on question 4 of the OP.
4) As far as I know Isabelle is a strict programming language. How can I use infinite objects? Let's say, infinite lists. Is it possible in Isabelle/HOL?
Isabelle/HOL is not a programming language and it specifically does not have any evaluation strategy (we could alternatively say: it has any evaluation strategy you like).
And here is why the word termination is misleading (drum roll): if there is no evaluation strategy and we have termination of a function f, people might expect f to terminate independent of the used strategy. But this is not the case. A termination proof of a function rather ensures that f is well-defined. Even if f is computable a proof of P merely ensures that there is an evaluation strategy for which f terminates.
(As an aside: what I call "strategy" here, is typically influenced by so called cong-rules (i.e., congruence rules) in Isabelle/HOL.)
As an example, it is trivial to prove that the function (see Section 10.1 Congruence rules and evaluation order in the documentation of the function package):
fun f' :: "nat ⇒ bool"
where
"f' n ⟷ f' (n - 1) ∨ n = 0"
terminates (in the sense defined by termination) after adding the cong-rule:
lemma [fundef_cong]:
"Q = Q' ⟹ (¬ Q' ⟹ P = P') ⟹ (P ∨ Q) = (P' ∨ Q')"
by auto
This cong-rule essentially states that logical-or should be "evaluated" from right to left. However, if you write the same function in, e.g., OCaml, it causes a stack overflow ...
EDIT: this answer is not really correct, check out Lars' comment below.
Unfortunately I don't have enough reputation to post this as a comment, so here is my go at an answer (please bear in mind I am no expert in Isabelle, but I also had similar questions once):
1) The idea is to prove statements about the defined functions. I am not sure how familiar you are with Computability Theory, but think about the Halting Problem and the fact that most undecidability problems stem from it (such as the Acceptance Problem). Imagine defining a function which you can't prove terminates. How could you then still prove it returns the number 42 when given input "ABC" and that it doesn't go into an infinite loop?
If instead you limit yourself to terminating functions, you can prove much more about them, essentially making a trade-off (or at least this is how I see it).
These ideas stem from Constructivism and Intuitionism and I recommend you check out Robert Harper's very interesting lecture series: https://www.youtube.com/watch?v=9SnefrwBIDc&list=PLGCr8P_YncjXRzdGq2SjKv5F2J8HUFeqN on Type Theory
You should check out especially the part about the absence of the Law of Excluded middle: http://youtu.be/3JHTb6b1to8?t=15m34s
2) See Manuel's answer.
3,4) Again see Manuel's answer keeping in mind Intuitionistic logic: "the fundamental entity is not the boolean, but rather the proof that something is true".
For me it took a long time to get adjusted to this way of thinking and I'm still not sure I understand it. I think the key though is to understand it is a more-or-less completely different way of thinking.

What are the most interesting equivalences arising from the Curry-Howard Isomorphism?

I came upon the Curry-Howard Isomorphism relatively late in my programming life, and perhaps this contributes to my being utterly fascinated by it. It implies that for every programming concept there exists a precise analogue in formal logic, and vice versa. Here's a "basic" list of such analogies, off the top of my head:
program/definition | proof
type/declaration | proposition
inhabited type | theorem/lemma
function | implication
function argument | hypothesis/antecedent
function result | conclusion/consequent
function application | modus ponens
recursion | induction
identity function | tautology
non-terminating function | absurdity/contradiction
tuple | conjunction (and)
disjoint union | disjunction (or) -- corrected by Antal S-Z
parametric polymorphism | universal quantification
So, to my question: what are some of the more interesting/obscure implications of this isomorphism? I'm no logician so I'm sure I've only scratched the surface with this list.
For example, here are some programming notions for which I'm unaware of pithy names in logic:
currying | "((a & b) => c) iff (a => (b => c))"
scope | "known theory + hypotheses"
And here are some logical concepts which I haven't quite pinned down in programming terms:
primitive type? | axiom
set of valid programs? | theory
Edit:
Here are some more equivalences collected from the responses:
function composition | syllogism -- from Apocalisp
continuation-passing | double negation -- from camccann
Since you explicitly asked for the most interesting and obscure ones:
You can extend C-H to many interesting logics and formulations of logics to obtain a really wide variety of correspondences. Here I've tried to focus on some of the more interesting ones rather than on the obscure, plus a couple of fundamental ones that haven't come up yet.
evaluation | proof normalisation/cut-elimination
variable | assumption
S K combinators | axiomatic formulation of logic
pattern matching | left-sequent rules
subtyping | implicit entailment (not reflected in expressions)
intersection types | implicit conjunction
union types | implicit disjunction
open code | temporal next
closed code | necessity
effects | possibility
reachable state | possible world
monadic metalanguage | lax logic
non-termination | truth in an unobservable possible world
distributed programs | modal logic S5/Hybrid logic
meta variables | modal assumptions
explicit substitutions | contextual modal necessity
pi-calculus | linear logic
EDIT: A reference I'd recommend to anyone interested in learning more about extensions of C-H:
"A Judgmental Reconstruction of Modal Logic" http://www.cs.cmu.edu/~fp/papers/mscs00.pdf - this is a great place to start because it starts from first principles and much of it is aimed to be accessible to non-logicians/language theorists. (I'm the second author though, so I'm biased.)
You're muddying things a little bit regarding nontermination. Falsity is represented by uninhabited types, which by definition can't be non-terminating because there's nothing of that type to evaluate in the first place.
Non-termination represents contradiction--an inconsistent logic. An inconsistent logic will of course allow you to prove anything, including falsity, however.
Ignoring inconsistencies, type systems typically correspond to an intuitionistic logic, and are by necessity constructivist, which means certain pieces of classical logic can't be expressed directly, if at all. On the other hand this is useful, because if a type is a valid constructive proof, then a term of that type is a means of constructing whatever you've proven the existence of.
A major feature of the constructivist flavor is that double negation is not equivalent to non-negation. In fact, negation is rarely a primitive in a type system, so instead we can represent it as implying falsehood, e.g., not P becomes P -> Falsity. Double negation would thus be a function with type (P -> Falsity) -> Falsity, which clearly is not equivalent to something of just type P.
However, there's an interesting twist on this! In a language with parametric polymorphism, type variables range over all possible types, including uninhabited ones, so a fully polymorphic type such as ∀a. a is, in some sense, almost-false. So what if we write double almost-negation by using polymorphism? We get a type that looks like this: ∀a. (P -> a) -> a. Is that equivalent to something of type P? Indeed it is, merely apply it to the identity function.
But what's the point? Why write a type like that? Does it mean anything in programming terms? Well, you can think of it as a function that already has something of type P somewhere, and needs you to give it a function that takes P as an argument, with the whole thing being polymorphic in the final result type. In a sense, it represents a suspended computation, waiting for the rest to be provided. In this sense, these suspended computations can be composed together, passed around, invoked, whatever. This should begin to sound familiar to fans of some languages, like Scheme or Ruby--because what it means is that double-negation corresponds to continuation-passing style, and in fact the type I gave above is exactly the continuation monad in Haskell.
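A rough Haskell rendering of that type and of the two directions of the equivalence (just a sketch; modulo the newtype wrapping this is the continuation monad Cont from Control.Monad.Cont):
{-# LANGUAGE RankNTypes #-}

-- the "double almost-negation" type from above
newtype DoubleNeg p = DoubleNeg { run :: forall r. (p -> r) -> r }

-- one direction: suspend a value, waiting for the rest of the computation
suspend :: p -> DoubleNeg p
suspend x = DoubleNeg (\k -> k x)

-- the other direction: apply to the identity function to get the value back
force :: DoubleNeg p -> p
force (DoubleNeg f) = f id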
Your chart is not quite right; in many cases you have confused types with terms.
function type | implication
function | proof of implication
function argument | proof of hypothesis
function result | proof of conclusion
function application | RULE modus ponens
recursion | n/a [1]
structural induction | fold (foldr for lists)
mathematical induction | fold for naturals (data N = Z | S N)
identity function | proof of A -> A, for all A
non-terminating function | n/a [2]
tuple | normal proof of conjunction
sum | disjunction
n/a [3] | first-order universal quantification
parametric polymorphism | second-order universal quantification
currying | (A,B) -> C -||- A -> (B -> C), for all A,B,C
primitive type | axiom
types of typeable terms | theory
function composition | syllogism
substitution | cut rule
value | normal proof
[1] The logic for a Turing-complete functional language is inconsistent. Recursion has no correspondence in consistent theories. In an inconsistent logic/unsound proof theory you could call it a rule which causes inconsistency/unsoundness.
[2] Again, this is a consequence of Turing-completeness. This would be a proof of an anti-theorem if the logic were consistent -- thus, it can't exist.
[3] Doesn't exist in functional languages, since they elide first-order logical features: all quantification and parametrization is done over formulae. If you had first-order features, there would be a kind other than *, * -> *, etc.; the kind of elements of the domain of discourse. For example, in Father(X,Y) :- Parent(X,Y), Male(X), X and Y range over the domain of discourse (call it Dom), and Male :: Dom -> *.
I really like this question. I don't know a whole lot, but I do have a few things (assisted by the Wikipedia article, which has some neat tables and such itself):
I think that sum types/union types (e.g. data Either a b = Left a | Right b) are equivalent to inclusive disjunction. And, though I'm not very well acquainted with Curry-Howard, I think this demonstrates it. Consider the following function:
andImpliesOr :: (a,b) -> Either a b
andImpliesOr (a,_) = Left a
If I understand things correctly, the type says that (a ∧ b) → (a ★ b) and the definition says that this is true, where ★ is either inclusive or exclusive or, whichever Either represents. You have Either representing exclusive or, ⊕; however, (a ∧ b) ↛ (a ⊕ b). For instance, ⊤ ∧ ⊤ ≡ ⊤, but ⊤ ⊕ ⊤ ≡ ⊥, and ⊤ ↛ ⊥. In other words, if both a and b are true, then the hypothesis is true but the conclusion is false, and so this implication must be false. However, clearly, (a ∧ b) → (a ∨ b), since if both a and b are true, then at least one is true. Thus, if discriminated unions are some form of disjunction, they must be the inclusive variety. I think this holds as a proof, but feel more than free to disabuse me of this notion.
Similarly, your definitions for tautology and absurdity as the identity function and non-terminating functions, respectively, are a bit off. The true formula is represented by the unit type, which is the type which has only one element (data ⊤ = ⊤; often spelled () and/or Unit in functional programming languages). This makes sense: since that type is guaranteed to be inhabited, and since there's only one possible inhabitant, it must be true. The identity function just represents the particular tautology that a → a.
Your comment about non-terminating functions is, depending on what precisely you meant, more off. Curry-Howard functions on the type system, but non-termination is not encoded there. According to Wikipedia, dealing with non-termination is an issue, as adding it produces inconsistent logics (e.g., I can define wrong :: a -> b by wrong x = wrong x, and thus “prove” that a → b for any a and b). If this is what you meant by “absurdity”, then you're exactly correct. If instead you meant the false statement, then what you want instead is any uninhabited type, e.g. something defined by data ⊥—that is, a data type without any way to construct it. This ensures that it has no values at all, and so it must be uninhabited, which is equivalent to false. I think you could probably also use a -> b, since if we forbid non-terminating functions, then this is also uninhabited, but I'm not 100% sure.
Wikipedia says that axioms are encoded in two different ways, depending on how you interpret Curry-Howard: either in the combinators or in the variables. I think the combinator view means that the primitive functions we are given encode the things we can say by default (similar to the way that modus ponens is an axiom because function application is primitive). And I think that the variable view may actually mean the same thing—combinators, after all, are just global variables which are particular functions. As for primitive types: if I'm thinking about this correctly, then I think that primitive types are the entities—the primitive objects that we're trying to prove things about.
According to my logic and semantics class, the fact that (a ∧ b) → c ≡ a → (b → c) (and also that b → (a → c)) is called the exportation equivalence law, at least in natural deduction proofs. I didn't notice at the time that it was just currying—I wish I had, because that's cool!
While we now have a way to represent inclusive disjunction, we don't have a way to represent the exclusive variety. We should be able to use the definition of exclusive disjunction to represent it: a ⊕ b ≡ (a ∨ b) ∧ ¬(a ∧ b). I don't know how to write negation, but I do know that ¬p ≡ p → ⊥, and both implication and falsehood are easy. We should thus able to represent exclusive disjunction by:
data ⊥
data Xor a b = Xor (Either a b) ((a,b) -> ⊥)
This defines ⊥ to be the empty type with no values, which corresponds to falsity; Xor is then defined to contain both (and) Either an a or a b (or) and a function (implication) from (a,b) (and) to the bottom type (false). However, I have no idea what this means. (Edit 1: Now I do, see the next paragraph!) Since there are no values of type (a,b) -> ⊥ (are there?), I can't fathom what this would mean in a program. Does anyone know a better way to think about either this definition or another one? (Edit 1: Yes, camccann.)
Edit 1: Thanks to camccann's answer (more particularly, the comments he left on it to help me out), I think I see what's going on here. To construct a value of type Xor a b, you need to provide two things. First, a witness to the existence of an element of either a or b as the first argument; that is, a Left a or a Right b. And second, a proof that there are not elements of both types a and b—in other words, a proof that (a,b) is uninhabited—as the second argument. Since you'll only be able to write a function from (a,b) -> ⊥ if (a,b) is uninhabited, what does it mean for that to be the case? That would mean that some part of an object of type (a,b) could not be constructed; in other words, that at least one, and possibly both, of a and b are uninhabited as well! In this case, if we're thinking about pattern matching, you couldn't possibly pattern-match on such a tuple: supposing that b is uninhabited, what would we write that could match the second part of that tuple? Thus, we cannot pattern match against it, which may help you see why this makes it uninhabited. Now, the only way to have a total function which takes no arguments (as this one must, since (a,b) is uninhabited) is for the result to be of an uninhabited type too—if we're thinking about this from a pattern-matching perspective, this means that even though the function has no cases, there's no possible body it could have either, and so everything's OK.
A lot of this is me thinking aloud/proving (hopefully) things on the fly, but I hope it's useful. I really recommend the Wikipedia article; I haven't read through it in any sort of detail, but its tables are a really nice summary, and it's very thorough.
Here's a slightly obscure one that I'm surprised wasn't brought up earlier: "classical" functional reactive programming corresponds to temporal logic.
Of course, unless you're a philosopher, mathematician or obsessive functional programmer, this probably brings up several more questions.
So, first off: what is functional reactive programming? It's a declarative way to work with time-varying values. This is useful for writing things like user interfaces because inputs from the user are values that vary over time. "Classical" FRP has two basic data types: events and behaviors.
Events represent values which only exist at discrete times. Keystrokes are a great example: you can think of the inputs from the keyboard as a character at a given time. Each keypress is then just a pair with the character of the key and the time it was pressed.
Behaviors are values that exist constantly but can be changing continuously. The mouse position is a great example: it is just a behavior of x, y coordinates. After all, the mouse always has a position and, conceptually, this position changes continually as you move the mouse. After all, moving the mouse is a single protracted action, not a bunch of discrete steps.
And what is temporal logic? Appropriately enough, it's a set of logical rules for dealing with propositions quantified over time. Essentially, it extends normal first-order logic with two quantifiers: □ and ◇. The first means "always": read □φ as "φ always holds". The second is "eventually": ◇φ means that "φ will eventually hold". This is a particular kind of modal logic. The following two laws relate the quantifiers:
□φ ⇔ ¬◇¬φ
◇φ ⇔ ¬□¬φ
So □ and ◇ are dual to each other in the same way as ∀ and ∃.
These two quantifiers correspond to the two types in FRP. In particular, □ corresponds to behaviors and ◇ corresponds to events. If we think about how these types are inhabited, this should make sense: a behavior is inhabited at every possible time, while an event only happens once.
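A rough sketch of the two types in Haskell (semantic models only, not any particular FRP library; Time = Double is an arbitrary choice here):
type Time = Double

newtype Behavior a = Behavior (Time -> a)  -- defined at every time, like □
newtype Event a    = Event [(Time, a)]     -- occurrences at discrete times, like ◇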
Related to the relationship between continuations and double negation, the type of call/cc is Peirce's law http://en.wikipedia.org/wiki/Call-with-current-continuation
C-H is usually stated as correspondence between intuitionistic logic and programs. However if we add the call-with-current-continuation (callCC) operator (whose type corresponds to Peirce's law), we get a correspondence between classical logic and programs with callCC.
2-continuation | Sheffer stroke
n-continuation language | Existential graph
Recursion | Mathematical Induction
One thing that is important, but has not yet been investigated, is the relationship between 2-continuations (continuations that take 2 parameters) and the Sheffer stroke. In classical logic, the Sheffer stroke can form a complete logical system by itself (plus some non-operator concepts), which means the familiar and, or, not can be implemented using only the Sheffer stroke, or nand.
This is an important fact for the programming-type correspondence, because it suggests that a single type combinator can be used to form all other types.
The type signature of a 2-continuation is (a,b) -> Void. With this encoding we can define 1-continuations (normal continuations) as (a,a) -> Void, the product type as ((a,b)->Void,(a,b)->Void)->Void, and the sum type as ((a,a)->Void,(b,b)->Void)->Void. This gives us an impression of its expressive power.
If we dig further, we will find out that Peirce's existential graphs are equivalent to a language whose only data type is the n-continuation, but I haven't seen any existing language in this form. So inventing one could be interesting, I think.
While it's not a simple isomorphism, this discussion of constructive LEM is a very interesting result. In particular, in the conclusion section, Oleg Kiselyov discusses how the use of monads to get double-negation elimination in a constructive logic is analogous to distinguishing computationally decidable propositions (for which LEM is valid in a constructive setting) from all propositions. The notion that monads capture computational effects is an old one, but this instance of the Curry--Howard isomorphism helps put it in perspective and helps get at what double-negation really "means".
Support for first-class continuations allows you to express P ∨ ¬P.
The trick is based on the fact that not calling the continuation and exiting with some expression is equivalent to calling the continuation with that same expression.
For more detailed view please see: http://www.cs.cmu.edu/~rwh/courses/logic/www-old/handouts/callcc.pdf

Resources