How to formalize Σ-algebras in Coq?

A signature Σ is a set of function symbols, where each symbol f is associated with a natural number called the arity of f. Intuitively, a function represented by a symbol f can only be applied to arity(f) arguments.
Example: Σ = {f/2, g/3} (f has arity 2, g has arity 3).
Let Σ be a signature. A Σ-algebra consists of:
a set A
a mapping that assigns to each function symbol f of Σ a function fA : A^n → A, where n is the arity of f.
My problem is that I am having some trouble formalizing these concepts (especially the arity of function symbols). I think I have to use dependent types, but I'm not really familiar with them yet.
What I've tried so far:
Require Import Ascii List.
(* ilist: length-indexed lists from CPDT; nfun: n-ary functions from Coq.Numbers.NaryFunctions *)
Definition function_symbol := ascii.
Definition variable := ascii. (* placeholder type of variables *)
Definition arity : Type := function_symbol -> nat.
Definition signature : Type := (list function_symbol) * arity.
Inductive sigma_term (sigma : signature) : Type :=
| SigmaVar : variable -> sigma_term sigma
| SigmaFunc (f : function_symbol) :
    In f (fst sigma) ->
    ilist (sigma_term sigma) (snd sigma f) ->
    sigma_term sigma.
Definition sigma_algebra (sigma : signature) : Type :=
  {A : Type & forall f : function_symbol, nfun A (snd sigma f) A}.
But it may be a bit over-complicated... I'm open to a better formalization.
References :
ilist (CPDT)
nfun (Coq Standard Library)
Universal Algebra where Σ-algebras are simply called algebras
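For comparison, here is a minimal sketch of one possible alternative formalization (not from the question; the names signature, symbol, arity, carrier and interp are mine, and the standard library's Vector.t plays the role that ilist plays above):
Require Coq.Vectors.Vector.
(* A signature: a type of symbols plus an arity for each symbol. *)
Record signature := {
  symbol : Type;
  arity  : symbol -> nat
}.
(* A Σ-algebra over sigma: a carrier together with, for each symbol f,
   an operation taking arity f arguments from the carrier. *)
Record algebra (sigma : signature) := {
  carrier : Type;
  interp  : forall f : symbol sigma,
            Vector.t carrier (arity sigma f) -> carrier
}.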


Which axioms may be safely added to Coq?

This question is a request for references or an explanation.
The main idea is: what if I add every axiom from the Coq standard library?
Will that produce a contradiction, or are the axioms mutually consistent?
What are other reliable sources of information about Coq besides its standard library? (I have seen a bunch of papers from the eighties and nineties. Obviously there are plenty of variants of type theory; which one underlies contemporary Coq? Or should I think "Everything that is known may be found in https://coq.inria.fr/refman/ , in https://sympa.inria.fr/sympa/arc/coq-club/1993-12/ and in the standard library"?)
(A) Do you know of a paper or other source where it is proved that certain axioms may be properly added to Coq?
"Properly" here means that the extended system will be a conservative extension of the previous one, or at least a strengthening that is considered safe.
(B) Personally, I am interested in these axioms:
0) ex2sig (is it consistent?)
Axiom ex2sig : forall (A:Type) (P:A->Prop), @ex A P -> @sig A P.
1) LEM
2) Functional extensionality
Axiom functional_extensionality_dep : forall {A} {B : A -> Type},
forall (f g : forall x : A, B x),
(forall x, f x = g x) -> f = g.
3) Choice
Theorem choice :
forall (A B : Type) (R : A->B->Prop),
(forall x : A, exists y : B, R x y) ->
exists f : A->B, (forall x : A, R x (f x)).
4) "Terms-as-Types"
Definition E := Type.
Axiom R : forall x : E, x -> E.
Axiom R_inj : forall (x : E) (a b : x), R x a = R x b -> a = b.
5) Proof-Irrelevance
Axiom proof_irrelevance : forall (P:Prop) (p1 p2:P), p1 = p2.
6) ... (you may recommend your axiom in comments)
e.g. Markov's principle
Parameter P : nat -> Prop.
Theorem M : (forall n, P n \/ ~ P n) /\ ~ (forall n, ~ P n) -> exists n, P n.
But we are not especially interested in Markov's principle, because we need a very strong classical theory: LEM (from which Markov's principle is provable), some strong form of choice (which will imply LEM), extensionality, etc. (Which axioms can we also add?) (By the way, there are many variants of choice in Coq, e.g. relational choice.)
P.S. Should extensive use of "noncomputational" axioms in Coq be treated as a misuse of it? (I think not, but I am not sure.)
Which properties of Coq will I lose after adding these axioms? (References and/or opinions are both welcome.)
P.P.S. The question is big and consists of many connected pieces, so every partial answer is welcome.
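For orientation, and assuming the module names below are still current, most of the axioms listed above already ship with the standard library:
Require Import Coq.Logic.Classical.                (* classic, i.e. LEM *)
Require Import Coq.Logic.FunctionalExtensionality. (* functional_extensionality_dep *)
Require Import Coq.Logic.ClassicalChoice.          (* the choice theorem stated above *)
Require Import Coq.Logic.ProofIrrelevance.         (* proof_irrelevance *)
Require Import Coq.Logic.IndefiniteDescription.    (* constructive_indefinite_description, i.e. the ex2sig above *)
(* Coq.Logic.ClassicalFacts and Coq.Logic.Diaconescu collect proofs of how
   such axioms relate to one another, e.g. forms of choice implying LEM. *)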

Notational syntax sugar for list and do notation

So I've noticed that in Idris, if you define your own list- or vector-like type — for example the following type, which I've found to be useful:
data HFVec : (f : Type -> Type) -> (n : Nat) -> Vect n Type -> Type where
  Nil : HFVec f Z []
  (::) : (a : f t) -> HFVec f n ts -> HFVec f (S n) (t :: ts)
— then you get list syntax for free:
test : HFVec List 2 [Int, String]
test = [[3], [""]]
I assume this is done when you have a constructor named ::, but I don't know for certain. In the same way you get do-notation if you have a constructor named >>= even if there is no monad implementation:
data Test : Type -> Type where
  Pure : a -> Test a
  (>>=) : Test a -> (a -> Test b) -> Test b
test : Test Int
test = do
  Pure 1
  x <- Pure 2
  Pure x
This is a pretty cool feature; the only thing is that I have not found it documented anywhere. It would be good to know exactly how these mechanisms work, so one can know under exactly what circumstances they can be expected to work. Also, are these kinds of rules the privilege of the compiler, or can the user create them with the syntax and dsl features?

ML function of type fn : 'a -> 'b

The function:
fn : 'a -> 'b
Now, are there any functions which can be defined and have this type?
There are two possible implementations for that function signature in Standard ML. One employs exceptions, the other recursion:
val raises : 'a -> 'b =
fn a => raise Fail "some error";
(* Infinite looping; satisfies the type signature, *)
(* but won't ever produce anything. *)
val rec loops : 'a -> 'b =
fn a => loops a;
The first solution may be useful for defining a helper function, say bug, which saves a few key strokes:
fun bug msg = raise Fail ("BUG: " ^ msg);
The other solution may be useful for defining server loops or REPLs.
In the Basis library, OS.Process.exit is such a function that returns an unknown generic type 'a:
- OS.Process.exit;
val it = fn : OS.Process.status -> 'a
A small echo REPL with type val repl = fn : unit -> 'a:
fun repl () =
  let
    val line = TextIO.inputLine TextIO.stdIn
  in
    case line of
      NONE => OS.Process.exit OS.Process.failure
    | SOME ":q\n" => OS.Process.exit OS.Process.success
    | SOME line => (TextIO.print line; repl ())
  end
You might also find this question about the type signature of Haskell's forever function useful.
I can think of one example:
fun f a = raise Div;
I can think of several:
One that is recursive,
fun f x = f x
Any function that raises exceptions,
fun f x = raise SomeExn
Any function that is mutually recursive, e.g.,
fun f x = g x
and g x = f x
Any function that uses casting (requires specific compiler support, below is for Moscow ML),
fun f x = Obj.magic x
Breaking the type system like this is probably cheating, but unlike all the other functions with this type, this function actually returns something. (In the simplest case, it's the identity function.)
A function that throws if the Collatz conjecture is false, recurses infinitely if true,
fun f x =
  let
    (* assumes collatz : IntInf.int -> bool and exception Collatz exist *)
    fun loop (i : IntInf.int) =
      if collatz i
      then loop (i + 1)
      else raise Collatz
  in loop 1 end
which is really just a combination of the first two.
Any function that performs arbitrary I/O and recurses infinitely, e.g.
fun f x = (print "Woohoo!"; f x)
fun repl x =
  let
    val y = read ()
    val z = eval y
    val _ = print z
  in repl x end
One may argue that exceptions and infinite recursion represent the same theoretical value ⊥ (bottom), meaning "no result"; but since you can catch exceptions and cannot escape infinite recursion, you may also argue that they're different.
If you restrict yourself to pure functions (e.g. no printing or exceptions), to Standard ML proper (no compiler-specific features), and you regard the mutually recursive cases as functionally equivalent in spite of their different recursion schemes, then we're back to just fun f x = f x.
The reason why fun f x = f x has type 'a -> 'b is perhaps obvious: the type-inference algorithm assigns the fresh type variables 'a and 'b to the input and output of f, and the only constraint generated by the body is that the recursive call f x takes the same input type and produces the same output type as f itself. That constraint is trivially satisfied, so 'a and 'b are never specialized any further.

Proving False with negative inductive types in Coq

The third chapter of CPDT briefly discusses why negative inductive types are forbidden in Coq. If we had
Inductive term : Set :=
| App : term -> term -> term
| Abs : (term -> term) -> term.
then we could easily define a function
Definition uhoh (t : term) : term :=
match t with
| Abs f => f t
| _ => t
end.
so that the term uhoh (Abs uhoh) would be non-terminating, with which "we would be able to prove every theorem".
I understand the non-termination part, but I don't get how we can prove anything with it. How would one prove False using term as defined above?
Reading your question made me realize that I didn't quite understand Adam's argument either. But inconsistency in this case results quite easily from Cantor's usual diagonal argument (a never-ending source of paradoxes and puzzles in logic). Consider the following assumptions:
Section Diag.
Variable T : Type.
Variable test : T -> bool.
Variables x y : T.
Hypothesis xT : test x = true.
Hypothesis yF : test y = false.
Variable g : (T -> T) -> T.
Variable g_inv : T -> (T -> T).
Hypothesis gK : forall f, g_inv (g f) = f.
Definition kaboom (t : T) : T :=
if test (g_inv t t) then y else x.
Lemma kaboom1 : forall t, kaboom t <> g_inv t t.
Proof.
intros t H.
unfold kaboom in H.
destruct (test (g_inv t t)) eqn:E; congruence.
Qed.
Lemma kaboom2 : False.
Proof.
assert (H := kaboom1 (g kaboom)).
rewrite -> gK in H.
congruence.
Qed.
End Diag.
This is a generic development that could be instantiated with the term type defined in CPDT: T would be term, and x and y would be two elements of term that the test function can discriminate between (e.g. App (Abs id) (Abs id) and Abs id). The key point is the last assumption: we assume that we have an invertible function g : (T -> T) -> T which, in your example, would be Abs. Using that function, we play the usual diagonalization trick: we define a function kaboom that is by construction different from every function T -> T, including itself. The contradiction results from that.
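To make the correspondence concrete, here is a purely hypothetical instantiation. Coq rejects the negative type term, so none of the following is actually accepted; it only illustrates how the section parameters would line up if it were:
(* test discriminates the two witnesses x and y. *)
Definition test (t : term) : bool :=
  match t with
  | App _ _ => true
  | Abs _   => false
  end.
Definition x : term := App (Abs (fun t => t)) (Abs (fun t => t)). (* test x = true  *)
Definition y : term := Abs (fun t => t).                          (* test y = false *)
Definition g : (term -> term) -> term := Abs.
Definition g_inv (t : term) : term -> term :=
  match t with
  | Abs f => f
  | _     => fun t' => t'
  end.
(* gK holds by computation (g_inv (Abs f) reduces to f), so kaboom2
   would then yield a proof of False. *)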

OCaml: Is there a function with type 'a -> 'a other than the identity function?

This isn't a homework question, by the way. It got brought up in class but my teacher couldn't think of any. Thanks.
How do you define the identity function? If you're only considering the syntax, there are many different identity functions, which all have the correct type:
let f x = x
let f2 x = (fun y -> y) x
let f3 x = (fun y -> y) (fun y -> y) x
let f4 x = (fun y -> (fun y -> y) y) x
let f5 x = (fun y z -> z) x x
let f6 x = if false then x else x
There are even weirder functions:
let f7 x = if Random.bool() then x else x
let f8 x = if Array.length Sys.argv < 5 then x else x
If you restrict yourself to a pure subset of OCaml (which rules out f7 and f8), all the functions you can build satisfy an observational equation that ensures, in a sense, that what they compute is the identity: for every value x and every function f : 'a -> 'a, we have f x = x.
This equation does not depend on the specific function; it is uniquely determined by the type. There are several theorems (framed in different contexts) that formalize the informal idea that "a polymorphic function can't change a parameter of polymorphic type, only pass it around". See for example Philip Wadler's paper, Theorems for free!.
The nice thing with those theorems is that they don't only apply to the 'a -> 'a case, which is not so interesting. You can get a theorem out of the ('a -> 'a -> bool) -> 'a list -> 'a list type of a sorting function, which says that its application commutes with the mapping of a monotone function.
More formally, if you have any function s with such a type, then for all types u, v, all functions cmp_u : u -> u -> bool, cmp_v : v -> v -> bool, f : u -> v, and every list li : u list, if cmp_u u u' = cmp_v (f u) (f u') for all u, u' (f both preserves and reflects the comparison), you have:
map f (s cmp_u li) = s cmp_v (map f li)
This is indeed true when s is exactly a sorting function, but I find it impressive to be able to prove that it is true of any function s with the same type.
Once you allow non-termination, either by diverging (looping indefinitely, as with let rec f x = f x) or by raising exceptions, of course you can have anything: you can build a function of type 'a -> 'b, and types don't mean anything anymore. Using Obj.magic : 'a -> 'b has the same effect.
There are saner ways to lose the equivalence to identity: you could work inside a non-empty environment, with predefined values accessible from the function. Consider for example the following function:
let counter = ref 0
let f x = incr counter; x
You still have the property that, for all x, f x = x: if you only consider the return value, your function still behaves as the identity. But once you consider side effects, you're not equivalent to the (side-effect-free) identity anymore: if I know counter, I can write a separating function that returns true when given this function f, and returns false for pure identity functions.
let separate g =
  let before = !counter in
  g ();
  !counter = before + 1
If counter is hidden (for example by a module signature, or simply let f = let counter = ... in fun x -> ...), and no other function can observe it, then we again can't distinguish f and the pure identity functions. So the story is much more subtle in the presence of local state.
let rec f x = f (f x)
This function never terminates, but it does have type 'a -> 'a.
If we only allow total functions, the question becomes more interesting. Without using evil tricks, it's not possible to write a total function of type 'a -> 'a other than the identity, but evil tricks are fun so:
let f (x:'a):'a = Obj.magic 42
Obj.magic is an evil abomination of type 'a -> 'b which allows all kinds of shenanigans to circumvent the type system.
On second thought, that one isn't total either, because it will crash when the result is used at a boxed type.
So the real answer is: the identity function is the only total function of type 'a -> 'a.
Throwing an exception can also give you an 'a -> 'a type:
# let f (x:'a) : 'a = raise (Failure "aaa");;
val f : 'a -> 'a = <fun>
If you restrict yourself to a "reasonable" strongly normalizing typed λ-calculus, there is a single function of type ∀α α→α, which is the identity function. You can prove it by examining the possible normal forms of a term of this type.
Philip Wadler's 1989 article "Theorems for Free" explains how functions having polymorphic types necessarily satisfy certain theorems (e.g. a map-like function commutes with composition).
There are however some nonintuitive issues when one deals with much polymorphism. For instance, there is a standard trick for encoding inductive types and recursion with impredicative polymorphism, by representing an inductive object (e.g. a list) using its recursor function. In some cases, there are terms belonging to the type of the recursor function that are not recursor functions; there is an example in §4.3.1 of Christine Paulin's PhD thesis.

Resources