Proving False with negative inductive types in Coq

The third chapter of CPDT briefly discusses why negative inductive types are forbidden in Coq. If we had
Inductive term : Set :=
| App : term -> term -> term
| Abs : (term -> term) -> term.
then we could easily define a function
Definition uhoh (t : term) : term :=
match t with
| Abs f => f t
| _ => t
end.
so that the term uhoh (Abs uhoh) would be non-terminating, with which "we would be able to prove every theorem".
I understand the non-termination part, but I don't get how we can prove anything with it. How would one prove False using term as defined above?

Reading your question made me realize that I didn't quite understand Adam's argument either. But inconsistency in this case results quite easily from Cantor's usual diagonal argument (a never-ending source of paradoxes and puzzles in logic). Consider the following assumptions:
Section Diag.
Variable T : Type.
Variable test : T -> bool.
Variables x y : T.
Hypothesis xT : test x = true.
Hypothesis yF : test y = false.
Variable g : (T -> T) -> T.
Variable g_inv : T -> (T -> T).
Hypothesis gK : forall f, g_inv (g f) = f.
Definition kaboom (t : T) : T :=
if test (g_inv t t) then y else x.
Lemma kaboom1 : forall t, kaboom t <> g_inv t t.
Proof.
intros t H.
unfold kaboom in H.
destruct (test (g_inv t t)) eqn:E; congruence.
Qed.
Lemma kaboom2 : False.
Proof.
assert (H := kaboom1 (g kaboom)).
rewrite -> gK in H.
congruence.
Qed.
End Diag.
This is a generic development that could be instantiated with the term type defined in CPDT: T would be term, and x and y would be two elements of term that test can tell apart (e.g. App (Abs id) (Abs id) and Abs id). The key point is the last assumption: we assume that we have an invertible function g : (T -> T) -> T which, in your example, would be Abs. Using that function, we play the usual diagonalization trick: we define a function kaboom that is by construction different from every function T -> T, including itself. The contradiction results from that.
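To make this concrete, here is a sketch of that instantiation. Since Coq rejects the negative Inductive declaration, we can only assume that such a term type exists, together with the inverse of Abs and a discriminating test that pattern matching would otherwise provide; the names Abs_inv and is_app below are hypothetical. Under those assumptions, kaboom2 yields False:
(* Sketch only: Coq refuses the negative type term, so we merely assume it.
   All the names below (term, App, Abs, Abs_inv, is_app, ...) are hypothetical. *)
Section Instantiation.
Variable term : Set.
Variable App : term -> term -> term.
Variable Abs : (term -> term) -> term.
(* Pattern matching on term would give an inverse of Abs ... *)
Variable Abs_inv : term -> (term -> term).
Hypothesis Abs_inv_Abs : forall f, Abs_inv (Abs f) = f.
(* ... and a boolean test telling App and Abs apart. *)
Variable is_app : term -> bool.
Hypothesis is_app_App : forall t u, is_app (App t u) = true.
Hypothesis is_app_Abs : forall f, is_app (Abs f) = false.
Definition id_term (t : term) : term := t.
(* Instantiating the generic development: T := term, g := Abs,
   x := App (Abs id) (Abs id), y := Abs id. *)
Lemma term_is_inconsistent : False.
Proof.
exact (kaboom2 term is_app
(App (Abs id_term) (Abs id_term)) (Abs id_term)
(is_app_App _ _) (is_app_Abs _)
Abs Abs_inv Abs_inv_Abs).
Qed.
End Instantiation.
The hypothesis gK of the generic development is exactly the computation rule Abs_inv (Abs f) = f that pattern matching on Abs would give us for free.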

Related

Which axioms may be safely added to Coq?

This question is a request for references or explanation.
The main idea is: what if I add every axiom from the Coq standard library?
Will that raise a contradiction, or are they all compatible with each other?
What are reliable sources of information about Coq other than its standard library? (I saw a bunch of papers from the eighties and nineties. Obviously there are plenty of variants of type theories. Which one underlies contemporary Coq? Or should I assume that everything that is known may be found in https://coq.inria.fr/refman/ , in https://sympa.inria.fr/sympa/arc/coq-club/1993-12/ , and in the standard library?)
(A) Do you know a paper or another source where it is proved that certain axioms may be properly added to Coq?
"Properly" here means that the extended system will be a conservative extension of the previous one, or at least a strengthening that is considered safe.
(B) Personally, I am interested in these axioms:
0) ex2sig (is it consistent?)
Axiom ex2sig : forall (A:Type) (P:A->Prop), @ex A P -> @sig A P.
1) LEM
2) Functional extensionality
Axiom functional_extensionality_dep : forall {A} {B : A -> Type},
forall (f g : forall x : A, B x),
(forall x, f x = g x) -> f = g.
3) Choice
Theorem choice :
forall (A B : Type) (R : A->B->Prop),
(forall x : A, exists y : B, R x y) ->
exists f : A->B, (forall x : A, R x (f x)).
4) "Terms-as-Types"
Definition E := Type.
Axiom R : forall x : E, x -> E.
Axiom R_inj : forall (x : E) (a b : x), R x a = R x b -> a = b.
5) Proof-Irrelevance
Axiom proof_irrelevance : forall (P:Prop) (p1 p2:P), p1 = p2.
6) ... (you may recommend your axiom in comments)
e.g. Markov's principle
Parameter P:nat -> Prop.
Theorem M:((forall n,(P n \/ ~ (P n)))/\ ~(forall n, ~(P n)) -> exists n,P n).
But we are not very much interested in Markov's principle.
Because we need some very strong classical theory: with LEM (so Markov's principle is provable), with some strong form of Choice (which will imply LEM), with extensionality, etc. (Which other axioms can we add?) (By the way, there are many variants of choice in Coq, relational choice among them.)
P.S. Should extensive use of "noncomputational" axioms in Coq be treated as a misuse of it? (I think not, but I am not sure.)
Which properties of Coq will I lose after adding these axioms? (Both references and opinions are welcome.)
P.P.S. The question is big and consists of many connected pieces, so every partial answer is welcome.

How to formalize Σ-algebras in Coq?

A signature Σ is a set of function symbols where each symbol f is associated with an integer called the arity of f. Intuitively, a function represented by a symbol f can only be applied to arity(f) arguments.
Example: Σ = {f/2, g/3}
Let Σ be a signature. A Σ-algebra is made of:
a set A
a mapping from each function symbol f of Σ to a function f_A : A^n → A, where n is the arity of f.
My problem is that I have some trouble formalizing these concepts (especially the arity of function symbols). I think I have to use dependent types but I'm not really familiar with them yet.
What I've tried so far :
Definition function_symbol := ascii.
Definition signature : Type := (list function_symbol) * arity.
Inductive sigma_term (sigma:signature) : Type :=
| SigmaVar : variable -> sigma_term sigma
| SigmaFunc f :
let functions := fst sigma in
let arity := snd sigma in
In f functions -> ilist term (arity f) -> sigma_term sigma.
Definition sigma_algebra (sigma:signature) : Type :=
let arity := snd sigma in
{A : Type & forall f:function_symbol, nfun A (arity f) A}.
But it may be a bit over-complicated... I'm open to a better formalization.
References :
ilist (CPDT)
nfun (Coq Standard Library)
Universal Algebra where Σ-algebras are simply called algebras
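For what it's worth, here is one possible lighter formalization, only a sketch: it assumes that a signature is simply a function assigning an arity to every symbol, and it uses length-indexed vectors for the arguments (the names symbol, signature, algebra, carrier and interp are mine, not taken from the question):
Require Import Coq.Vectors.Vector.
Definition symbol := nat. (* could just as well be ascii, as in the question *)
Definition signature := symbol -> nat. (* every symbol gets an arity *)
(* A sigma-algebra: a carrier together with, for every symbol f, an
   interpretation taking a vector of (sigma f) arguments to the carrier. *)
Record algebra (sigma : signature) := {
carrier :> Type;
interp : forall f : symbol, Vector.t carrier (sigma f) -> carrier
}.
(* The example signature {f/2, g/3}, encoding f as 0 and g as 1. *)
Definition example_sig : signature :=
fun s => match s with 0 => 2 | 1 => 3 | _ => 0 end.
(* A degenerate algebra over the one-element type, just to show the shape. *)
Definition trivial_algebra (sigma : signature) : algebra sigma :=
{| carrier := unit; interp := fun _ _ => tt |}.
Making the arity a total function sidesteps the In f functions side condition from the attempt above, and Vector.t plays the role of ilist from CPDT.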

Are constructors in the plain calculus of constructions disjoint and injective?

Based on this answer, it looks like the calculus of inductive constructions, as used in Coq, has disjoint, injective constructors for inductive types.
In the plain calculus of constructions (i.e., without primitive inductive types), which uses impredicative encodings for types (e.g., ∏(Nat: *).∏(Succ: Nat → Nat).∏(Zero: Nat).Nat), is this still true? Can I always find out which "constructor" was used? Also, is injectivity (as in ∀a b.I a = I b → a = b) provable in Coq with Prop or impredicative Set?
This seems to cause trouble in Idris.
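For concreteness, the ∏-encoding of Nat quoted in the question can be written directly in Coq's impredicative Prop; this is only a sketch with ad hoc names (nat', zero', succ'):
Definition nat' : Prop := forall (Nat : Prop), (Nat -> Nat) -> Nat -> Nat.
Definition zero' : nat' := fun Nat succ zero => zero.
Definition succ' (n : nat') : nat' :=
fun Nat succ zero => succ (n Nat succ zero).
The question is whether facts such as succ' n <> zero' or the injectivity of succ' are provable for such encodings; the answer below addresses the analogous question for an encoding of the booleans.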
(I am not sure about all the points that you asked, so I am making this answer a community wiki, so that others can add to it.)
Just for completeness, let's use an impredicative encoding of the Booleans as an example. I also included the encodings of some basic connectives.
Definition bool : Prop := forall (A : Prop), A -> A -> A.
Definition false : bool := fun A _ Hf => Hf.
Definition true : bool := fun A Ht _ => Ht.
Definition eq (n m : bool) : Prop :=
forall (P : bool -> Prop), P n -> P m.
Definition False : Prop := forall (A : Prop), A.
We cannot prove that true and false are disjoint in CoC; that is, the following statement is not provable:
eq false true -> False.
This is because, if this statement were provable in CoC, we would be able to prove true <> false in Coq, and this would contradict proof irrelevance, which is a valid axiom to add. Here is a proof:
Section injectivity_is_not_provable.
Variable Hneq : eq false true -> False. (* suppose it's provable in CoC *)
Lemma injectivity : false <> true.
Proof.
intros Heq.
rewrite Heq in Hneq.
now apply (Hneq (fun P x => x)).
Qed.
Require Import Coq.Logic.ProofIrrelevance.
Fact contradiction : Logic.False.
Proof.
pose proof (proof_irrelevance bool false true) as H.
apply (injectivity H).
Qed.
End injectivity_is_not_provable.

Check if a tree is a BST using a provided higher order function in OCAML

So let me start by saying this was part of a past homework assignment I couldn't solve, but as I am preparing for a test I would like to know how to do it. I have these implementations of map_tree and fold_tree provided by the instructor:
let rec map_tree (f:'a -> 'b) (t:'a tree) : 'b tree =
match t with
| Leaf x -> Leaf (f x)
| Node (x,lt,rt) -> Node (f x,(map_tree f lt),(map_tree f rt))
let fold_tree (f1:'a->'b) (f2:'a->'b->'b->'b) (t:'a tree) : 'b =
let rec aux t =
match t with
| Leaf x -> f1 x
| Node (x,lt,rt) -> f2 x (aux lt) (aux rt)
in aux t
I need to implement a function that verifies a tree is a BST using the above functions. So far this is what I've accomplished, and I'm getting the error:
Error: This expression has type bool but an expression was expected of type
'a tree
This is my code:
let rec smaller_than t n : bool =
begin match t with
| Leaf x -> true
| Node(x,lt,rt) -> (x<n) && (smaller_than lt x) && (smaller_than rt x)
end
let rec greater_equal_than t n : bool =
begin match t with
| Leaf x -> true
| Node(x,lt,rt) -> (x>=n) && (greater_equal_than lt x) && (greater_equal_than rt x)
end
let check_bst t =
fold_tree (fun x -> true) (fun x lt rt -> ((check_bst lt) && (check_bst rt)&&(smaller_than lt x)&&(greater_equal_than rt x))) t;;
Any suggestions? I seem to have trouble understanding exactly how higher-order functions work in OCaml.
What is the specification of a BST? It's a binary tree where:
all the elements in the left subtree (which is also a BST) are strictly smaller than the value stored at the node
and all the ones in the right subtree (which is also a BST) are greater than or equal to the value stored at the node
A fold is an induction principle: you have to explain how to deal with the base cases (the Leaf case here) and how to combine the results for the subcases in the step cases (the Node case here).
A Leaf is always a BST so the base case is going to be pretty simple. However, in the Node case, we need to make sure that the values in both subtrees fall on the correct side of the value stored at the node. To be able to perform this check, we are going to need extra information. The idea is to have a fold computing:
whether the given tree is a BST
and the interval in which all of its values live
Let's introduce type synonyms to structure our thoughts:
type is_bst = bool
type 'a interval = 'a * 'a
As predicted, the base case is easy:
let leaf_bst (a : 'a) : is_bst * 'a interval = (true, (a, a))
In the Node case, we have the value a stored at the node and the results computed recursively for the left (lih as in left induction hypothesis) and right subtrees respectively. The tree thus built is a BST if and only if the two subtrees are BSTs (b1 && b2) and their values respect the properties described earlier (ub1 < a and a <= lb2). The interval in which this new tree's values live is now the larger interval (lb1, ub2).
let node_bst (a : 'a) (lih : is_bst * 'a interval) (rih : is_bst * 'a interval) =
let (b1, (lb1, ub1)) = lih in
let (b2, (lb2, ub2)) = rih in
(b1 && b2 && ub1 < a && a <= lb2, (lb1, ub2))
Finally, the function checking whether a tree is a BST is defined by projecting the boolean out of the result of calling fold_tree leaf_bst node_bst on it.
let bst (t : 'a tree) : bool =
fst (fold_tree leaf_bst node_bst t)

OCaml: Is there a function with type 'a -> 'a other than the identity function?

This isn't a homework question, by the way. It got brought up in class but my teacher couldn't think of any. Thanks.
How do you define the identity function? If you're only considering the syntax, there are different identity functions, which all have the correct type:
let f x = x
let f2 x = (fun y -> y) x
let f3 x = (fun y -> y) (fun y -> y) x
let f4 x = (fun y -> (fun y -> y) y) x
let f5 x = (fun y z -> z) x x
let f6 x = if false then x else x
There are even weirder functions:
let f7 x = if Random.bool() then x else x
let f8 x = if Array.length Sys.argv < 5 then x else x
If you restrict yourself to a pure subset of OCaml (which rules out f7 and f8), all the functions you can build satisfy an observational equation ensuring, in a sense, that what they compute is the identity: for every value x and every function f : 'a -> 'a, we have f x = x.
This equation does not depend on the specific function; it is uniquely determined by the type. There are several theorems (framed in different contexts) that formalize the informal idea that "a polymorphic function can't change a parameter of polymorphic type, only pass it around". See for example Philip Wadler's paper, Theorems for free!.
The nice thing with those theorems is that they don't only apply to the 'a -> 'a case, which is not so interesting. You can get a theorem out of the ('a -> 'a -> bool) -> 'a list -> 'a list type of a sorting function, which says that its application commutes with the mapping of a monotone function.
More formally, if you have any function s with such a type, then for all types u, v, functions cmp_u : u -> u -> bool, cmp_v : v -> v -> bool, f : u -> v, and any list li : u list, if cmp_u x y implies cmp_v (f x) (f y) for all x and y (that is, f is monotone), you have:
map f (s cmp_u li) = s cmp_v (map f li)
This is indeed true when s is exactly a sorting function, but I find it impressive to be able to prove that it is true of any function s with the same type.
Once you allow non-termination, either by diverging (looping indefinitely, as with let rec f x = f x) or by raising exceptions, of course you can have anything: you can build a function of type 'a -> 'b, and types don't mean anything anymore. Using Obj.magic : 'a -> 'b has the same effect.
There are saner ways to lose the equivalence to identity: you could work inside a non-empty environment, with predefined values accessible from the function. Consider for example the following function:
let counter = ref 0
let f x = incr counter; x
You still have the property that for all x, f x = x: if you only consider the return value, your function still behaves as the identity. But once you consider side effects, you're not equivalent to the (side-effect-free) identity anymore: if I know counter, I can write a separating function that returns true when given this function f, and would return false for pure identity functions.
let separate g =
let before = !counter in
g ();
!counter = before + 1
If counter is hidden (for example by a module signature, or simply let f = let counter = ... in fun x -> ...), and no other function can observe it, then we again can't distinguish f and the pure identity functions. So the story is much more subtle in the presence of local state.
let rec f x = f (f x)
This function never terminates, but it does have type 'a -> 'a.
If we only allow total functions, the question becomes more interesting. Without using evil tricks, it's not possible to write a total function of type 'a -> 'a other than the identity, but evil tricks are fun, so:
let f (x:'a):'a = Obj.magic 42
Obj.magic is an evil abomination of type 'a -> 'b which allows all kinds of shenanigans to circumvent the type system.
On second thought that one isn't total either because it will crash when used with boxed types.
So the real answer is: the identity function is the only total function of type 'a -> 'a.
Throwing an exception can also give you an 'a -> 'a type:
# let f (x:'a) : 'a = raise (Failure "aaa");;
val f : 'a -> 'a = <fun>
If you restrict yourself to a "reasonable" strongly normalizing typed λ-calculus, there is a single function of type ∀α α→α, which is the identity function. You can prove it by examining the possible normal forms of a term of this type.
Philip Wadler's 1989 article "Theorems for Free" explains how functions having polymorphic types necessarily satisfy certain theorems (e.g. a map-like function commutes with composition).
There are however some nonintuitive issues when one deals with much polymorphism. For instance, there is a standard trick for encoding inductive types and recursion with impredicative polymorphism, by representing an inductive object (e.g. a list) using its recursor function. In some cases, there are terms belonging to the type of the recursor function that are not recursor functions; there is an example in §4.3.1 of Christine Paulin's PhD thesis.
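To illustrate the standard trick being referred to, here is the usual recursor-based (Church) encoding of lists in Coq's impredicative Prop, as a sketch with ad hoc names:
Definition clist (A : Prop) : Prop :=
forall R : Prop, (A -> R -> R) -> R -> R.
Definition cnil (A : Prop) : clist A :=
fun R c n => n.
Definition ccons (A : Prop) (x : A) (l : clist A) : clist A :=
fun R c n => c x (l R c n).
(* Folding over such a list is just application: the list is its own recursor. *)
Definition cfold (A R : Prop) (c : A -> R -> R) (n : R) (l : clist A) : R :=
l R c n.
Paulin's observation is that, in the plain calculus of constructions, not every inhabitant of clist A need be of the form ccons x1 (ccons x2 (... (cnil A))), which is related to the fact that induction principles are not derivable for such encodings.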
