Nested recursion and `Program Fixpoint` or `Function`

I’d like to define the following function using Program Fixpoint or Function in Coq:
Require Import Coq.Lists.List.
Import ListNotations.
Require Import Coq.Program.Wf.
Require Import Recdef.
Inductive Tree := Node : nat -> list Tree -> Tree.
Fixpoint height (t : Tree) : nat :=
match t with
| Node x ts => S (fold_right Nat.max 0 (map height ts))
end.
Program Fixpoint mapTree (f : nat -> nat) (t : Tree) {measure (height t)} : Tree :=
match t with
| Node x ts => Node (f x) (map (fun t => mapTree f t) ts)
end.
Next Obligation.
Unfortunately, at this point I have a proof obligation height t < height (Node x ts) without knowing that t is a member of ts.
Similarly with Function instead of Program Fixpoint, only that Function detects the problem and aborts the definition:
Error:
the term fun t : Tree => mapTree f t can not contain a recursive call to mapTree
I would expect to get a proof obligation of In t ts → height t < height (Node x ts).
Is there a way of getting that that does not involve restructuring the function definition? (I know work-arounds that require inlining the definition of map here, for example – I’d like to avoid these.)
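For reference, one such workaround inlines the definition of map as a nested fix, so that the whole definition is structural and no measure is needed. A sketch (mapTree' is an illustrative name):
Fixpoint mapTree' (f : nat -> nat) (t : Tree) : Tree :=
match t with
| Node x ts =>
  (* The nested fix replaces map, so the guardedness checker can see
     that mapTree' is only applied to subterms of t. *)
  Node (f x) ((fix go (ts : list Tree) : list Tree :=
    match ts with
    | [] => []
    | t :: ts' => mapTree' f t :: go ts'
    end) ts)
end.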
Isabelle
To justify that expectation, let me show what happens when I do the same in Isabelle, using the function command, which is (AFAIK) related to Coq’s Function command:
theory Tree imports Main begin
datatype Tree = Node nat "Tree list"
fun height where
"height (Node _ ts) = Suc (foldr max (map height ts) 0)"
function mapTree where
"mapTree f (Node x ts) = Node (f x) (map (λ t. mapTree f t) ts)"
by pat_completeness auto
termination
proof (relation "measure (λ(f,t). height t)")
show "wf (measure (λ(f, t). height t))" by auto
next
fix f :: "nat ⇒ nat" and x :: nat and ts :: "Tree list" and t
assume "t ∈ set ts"
thus "((f, t), (f, Node x ts)) ∈ measure (λ(f, t). height t)"
by (induction ts) auto
qed
In the termination proof, I get the assumption t ∈ set ts.
Note that Isabelle does not require a manual termination proof here, and the following definition works just fine:
fun mapTree where
"mapTree f (Node x ts) = Node (f x) (map (λ t. mapTree f t) ts)"
This works because the map function has a “congruence lemma” of the form
xs = ys ⟹ (⋀x. x ∈ set ys ⟹ f x = g x) ⟹ map f xs = map g ys
that the function command uses to find out that the termination proof only needs to consider t ∈ set ts.
If such a lemma is not available, e.g. because I define
definition "map' = map"
and use that in mapTree, I get the same unprovable proof obligation as in Coq. I can make it work again by declaring a congruence lemma for map', e.g. using
declare map_cong[folded map'_def,fundef_cong]

In this case, you actually do not need well-founded recursion in its full generality:
Require Import Coq.Lists.List.
Set Implicit Arguments.
Inductive tree := Node : nat -> list tree -> tree.
Fixpoint map_tree (f : nat -> nat) (t : tree) : tree :=
match t with
| Node x ts => Node (f x) (map (fun t => map_tree f t) ts)
end.
Coq is able to figure out by itself that recursive calls to map_tree are performed on strict subterms. However, proving anything about this function is difficult, as the induction principle generated for tree is not useful:
tree_ind :
forall P : tree -> Prop,
(forall (n : nat) (l : list tree), P (Node n l)) ->
forall t : tree, P t
This is essentially the same problem you described earlier. Luckily, we can fix the issue by proving our own induction principle with a proof term.
Require Import Coq.Lists.List.
Import ListNotations.
Unset Elimination Schemes.
Inductive tree := Node : nat -> list tree -> tree.
Set Elimination Schemes.
Fixpoint tree_ind
(P : tree -> Prop)
(IH : forall (n : nat) (ts : list tree),
fold_right (fun t => and (P t)) True ts ->
P (Node n ts))
(t : tree) : P t :=
match t with
| Node n ts =>
let fix loop ts :=
match ts return fold_right (fun t' => and (P t')) True ts with
| [] => I
| t' :: ts' => conj (tree_ind P IH t') (loop ts')
end in
IH n ts (loop ts)
end.
Fixpoint map_tree (f : nat -> nat) (t : tree) : tree :=
match t with
| Node x ts => Node (f x) (map (fun t => map_tree f t) ts)
end.
The Unset Elimination Schemes command prevents Coq from generating its default (and not useful) induction principle for tree. The occurrence of fold_right in the induction hypothesis simply expresses that the predicate P holds of every tree t' appearing in ts.
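That fold_right conjunction expresses the same thing as the standard library's Forall predicate. For reference, here is an equivalent principle stated with Forall (a sketch; the lemma below keeps using the fold_right version, and the explicit @-applications avoid relying on the implicit-argument settings of the Forall constructors):
Fixpoint tree_ind' (P : tree -> Prop)
  (IH : forall (n : nat) (ts : list tree), Forall P ts -> P (Node n ts))
  (t : tree) : P t :=
  match t with
  | Node n ts =>
    let fix loop ts : Forall P ts :=
      match ts return Forall P ts with
      | [] => @Forall_nil _ P
      | t' :: ts' => @Forall_cons _ P t' ts' (tree_ind' P IH t') (loop ts')
      end in
    IH n ts (loop ts)
  end.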
Here is a statement that you can prove using this induction principle:
Lemma map_tree_comp f g t :
map_tree f (map_tree g t) = map_tree (fun n => f (g n)) t.
Proof.
induction t as [n ts IH]; simpl; f_equal.
induction ts as [|t' ts' IHts]; try easy.
simpl in *.
destruct IH as [IHt' IHts'].
specialize (IHts IHts').
now rewrite IHt', <- IHts.
Qed.

You can now do this with Equations and get the right elimination principle automatically, using either structural nested recursion or well-founded recursion.
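For example, here is a self-contained sketch, assuming the Equations plugin is installed; it follows the nested where-clause style of the rose-tree examples in the Equations documentation, and Equations also derives an elimination principle (mapTree_elim) tailored to the definition:
From Equations Require Import Equations.
Require Import Coq.Lists.List.
Import ListNotations.
Inductive Tree := Node : nat -> list Tree -> Tree.
(* Nested structural recursion: the where-clause plays the role of map. *)
Equations mapTree (f : nat -> nat) (t : Tree) : Tree :=
mapTree f (Node x ts) := Node (f x) (mapForest ts)
  where mapForest : list Tree -> list Tree :=
  mapForest [] := [];
  mapForest (t :: ts) := mapTree f t :: mapForest ts.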

In general, it might be advisable to avoid this problem. But if one really wants to obtain the proof obligation that Isabelle gives you, here is a way:
In Isabelle, we can give an external lemma that states that map applies its function argument only to members of the given list. In Coq, we cannot do this in an external lemma, but we can do it in the type. So instead of the normal type of map
forall A B, (A -> B) -> list A -> list B
we want the type to say “f is only ever applied to elements of the list”:
forall A B (xs : list A), (forall x : A, In x xs -> B) -> list B
(It requires reordering the arguments so that the type of f can mention xs.)
Writing this function is not trivial, and I found it easier to use a proof script:
Definition map {A B} (xs : list A) (f : forall (x:A), In x xs -> B) : list B.
Proof.
induction xs.
* exact [].
* refine (f a _ :: IHxs _).
- left. reflexivity.
- intros. eapply f. right. eassumption.
Defined.
But you can also write it “by hand”:
Fixpoint map {A B} (xs : list A) : forall (f : forall (x:A), In x xs -> B), list B :=
match xs with
| [] => fun _ => []
| x :: xs => fun f => f x (or_introl eq_refl) :: map xs (fun y h => f y (or_intror h))
end.
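Either version computes like the ordinary map, since the In proofs are inferred and do not affect the result; a quick sanity check (map_example is an illustrative name):
Example map_example : map [1; 2; 3] (fun x _ => x * 2) = [2; 4; 6].
Proof. reflexivity. Qed.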
In either case, the result is nice: I can use this function in mapTree, i.e.
Program Fixpoint mapTree (f : nat -> nat) (t : Tree) {measure (height t)} : Tree :=
match t with
| Node x ts => Node (f x) (map ts (fun t _ => mapTree f t))
end.
Next Obligation.
and I don’t have to do anything with the new argument to f, but it shows up in the termination proof obligation as In t ts → height t < height (Node x ts), as desired. So I can prove that and define mapTree:
simpl.
apply Lt.le_lt_n_Sm.
induction ts; inversion_clear H.
- subst. apply PeanoNat.Nat.le_max_l.
- rewrite IHts by assumption.
apply PeanoNat.Nat.le_max_r.
Qed.
It only works with Program Fixpoint, not with Function, unfortunately.

Related

Reification for interpretation functions which use different interpretation functions below quantifiers/lambdas

I'm currently trying to use Isabelle/HOL's reification tactic. I'm unable to use different interpretation functions below quantifiers/lambdas. The MWE below illustrates this. The important part is the definition of the form function, where the ter call occurs below the ∀. When trying to use the reify tactic I get a Cannot find the atoms equation error. I don't get this error for interpretation functions which only call themselves under quantifiers.
I can't really reformulate my problem to avoid this. Does anybody know how to get reify working for such cases?
theory MWE
imports
"HOL-Library.Reflection"
begin
datatype Ter = V nat | P Ter Ter
datatype Form = All0 Ter
fun ter :: "Ter ⇒ nat list ⇒ nat"
where "ter (V n) vs = vs ! n"
| "ter (P t1 t2) vs = ter t1 vs + ter t2 vs"
fun form :: "Form ⇒ nat list ⇒ bool"
where "form (All0 t) vs = (∀ v . ter t (v#vs) = 0)" (* use of different interpretation function below quantifier *)
(*
I would expect this to reify to:
form (All0 (P (V 0) (V 0))) []
instead I get an error :-(
*)
lemma "∀ n :: nat . n + n = 0"
apply (reify ter.simps form.simps)
(* proof (prove)
goal (1 subgoal):
1. ∀n. n + n = n + n
Cannot find the atoms equation *)
oops
(* As a side note: the following example in src/HOL/ex/Reflection_Examples.thy (line 448, Isabelle2022) seems to be broken? For me, the reify invocation
doesn't change the goal at all. It uses quantifiers too, but only calls the same interpretation function under quantifiers and also doesn't throw an error,
so at least for me this seems to be unrelated to my problem.
*)
(*
lemma " ∀x. ∃n. ((Suc n) * length (([(3::int) * x + f t * y - 9 + (- z)] # []) # xs) = length xs) ∧ m < 5*n - length (xs # [2,3,4,x*z + 8 - y]) ⟶ (∃p. ∀q. p ∧ q ⟶ r)"
apply (reify Irifm.simps Irnat_simps Irlist.simps Irint_simps)
oops
*)
end

How to represent Kleisli composition of substitutions in abstract trees

Context: I have been trying to implement the unification algorithm (the algorithm to find the most general unifier of two abstract syntax trees). Since a unifier is a substitution, the algorithm requires defining composition of substitutions.
To be specific, given a type treeSigma dependent on another type X, a substitution is a function of type:
X -> treeSigma X
and the function substitute takes a substitution as an input and has type
substitute: (X-> (treeSigma X))-> (treeSigma X) -> (treeSigma X)
I need to define a function to compose two substitutions:
compose_kleisli (rho1 rho2: X -> treeSigma X) : X -> treeSigma X := ...
such that,
forall tr: treeSigma X,
substitute (compose_kleisli rho1 rho2) tr = substitute rho1 (substitute rho2 tr).
I am fairly new to Coq and have been stuck with defining this composition.
How can I define this composition?
I tried to define it using Record like this:
Record compose {X s} (rho1 rho2: X-> treeSigma X):= mkCompose{
RHO: X-> treeSigma X;
CONDITION: forall t, substitute RHO t = substitute rho2 (substitute rho1 t)
}.
but along with this, I would need to prove the result that the composition can be defined for any two substitutions. Something like:
Theorem composeTotal: forall {X s} (rho1 rho2: X-> treeSigma s X), exists rho3,
forall t, substitute rho3 t = substitute rho2 (substitute rho1 t).
Proving this would require a construction of rho3 which circles back to the same problem of defining compose.
treeSigma is defined as:
(* Signature *)
Record sigma: Type := mkSigma {
symbol : Type;
arity : symbol -> nat
}.
Record sigmaLeaf (s:sigma): Type := mkLeaf {
cLeaf: symbol s;
condLeaf: arity s cLeaf = 0
}.
Record sigmaNode (s:sigma): Type := mkNode {
fNode: symbol s;
condNode: arity s fNode <> 0
}.
(* Sigma Algebra *)
Record sigAlg (s:sigma) (X:Type) := mkAlg {
Carrier: Type;
meaning: forall f:(sigmaNode s), (Vector.t Carrier (arity s (fNode s f))) -> Carrier;
meanLeaf: forall f:(sigmaLeaf s), Vector.t Carrier 0 -> Carrier
}.
(* Abstract tree on arbitrary signature. *)
Inductive treeSigma (s:sigma) (X:Type):=
| VAR (x:X)
| LEAF (c: sigmaLeaf s)
| NODE (f: sigmaNode s) (sub: Vector.t (treeSigma s X) (arity s (fNode s f)) ).
(* Defining abstract syntax as a sigma algebra. *)
Definition meanTreeNode {s X} (f:sigmaNode s) (sub: Vector.t (treeSigma s X) (s.(arity) (fNode s f))): treeSigma s X := NODE s X f sub.
Definition meanTreeLeaf {s X} (c:sigmaLeaf s) (sub: Vector.t (treeSigma s X) 0) := LEAF s X c.
Definition treeSigAlg {s X} := mkAlg s X (treeSigma s X) meanTreeNode meanTreeLeaf.
The substitution function is defined as:
Fixpoint homoSigma1 {X:Type} {s} (A: sigAlg s X) (rho: X-> (Carrier s X A))
(wft: (treeSigma s X)) {struct wft}: (Carrier s X A) :=
match wft with
| VAR _ _ x => rho x
| LEAF _ _ c => meanLeaf s X A c []
| NODE _ _ f l2 => meaning s X A f (
(fix homoSigVec k (l2:Vector.t _ k):= match l2 with
| [] => []
| t::l2s => (homoSigma1 A rho t):: (homoSigVec _ l2s)
end)
(arity s (fNode s f)) l2)
end.
Definition substitute {X s} (rho: X-> treeSigma s X) (t: treeSigma s X) := @homoSigma1 X s treeSigAlg rho t.
To be particular, a substitution is the homomorphic extension of rho (which is a variable valuation).
Definitions like this are challenging to work with because the tree type occurs recursively inside of another inductive type. Coq has trouble generating induction principles for these types on its own, so you need to help it a little bit. Here is a possible solution, for a slightly simplified set up:
Require Import Coq.Vectors.Vector.
Import VectorNotations.
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Section Dev.
Variable symbol : Type.
Variable arity : symbol -> nat.
Record alg := Alg {
alg_sort :> Type;
alg_op : forall f : symbol, Vector.t alg_sort (arity f) -> alg_sort;
}.
Arguments alg_op {_} f _.
(* Turn off the automatic generation of induction principles.
This tree type does not distinguish between leaves and nodes,
since they only differ in their arity. *)
Unset Elimination Schemes.
Inductive treeSigma (X:Type) :=
| VAR (x:X)
| NODE (f: symbol) (args : Vector.t (treeSigma X) (arity f)).
Arguments NODE {X} _ _.
Set Elimination Schemes.
(* Manual definition of a custom induction principle for treeSigma.
HNODE is the inductive case for the NODE constructor; the vs argument is
saying that the induction hypothesis holds for each tree in the vector of
arguments. *)
Definition treeSigma_rect (X : Type) (T : treeSigma X -> Type)
(HVAR : forall x, T (VAR x))
(HNODE : forall f (ts : Vector.t (treeSigma X) (arity f))
(vs : Vector.fold_right (fun t V => T t * V)%type ts unit),
T (NODE f ts)) :
forall t, T t :=
fix loopTree (t : treeSigma X) : T t :=
match t with
| VAR x => HVAR x
| NODE f ts =>
let fix loopVector n (ts : Vector.t (treeSigma X) n) :
Vector.fold_right (fun t V => T t * V)%type ts unit :=
match ts with
| [] => tt
| t :: ts => (loopTree t, loopVector _ ts)
end in
HNODE f ts (loopVector (arity f) ts)
end.
Definition treeSigma_ind (X : Type) (T : treeSigma X -> Prop) :=
@treeSigma_rect X T.
Definition treeSigma_alg (X:Type) : alg := {|
alg_sort := treeSigma X;
alg_op := @NODE X;
|}.
Fixpoint homoSigma {X : Type} {Y : alg} (ρ : X -> Y) (t : treeSigma X) : Y :=
match t with
| VAR x => ρ x
| NODE f xs => alg_op f (Vector.map (homoSigma ρ) xs)
end.
Definition substitute X (ρ : X -> treeSigma X) (t : treeSigma X) : treeSigma X :=
@homoSigma X (treeSigma_alg X) ρ t.
(* You can define composition simply by applying substitution. *)
Definition compose X (ρ1 ρ2 : X -> treeSigma X) : X -> treeSigma X :=
fun x => substitute ρ1 (ρ2 x).
(* The property you are looking for follows by induction on the tree. Note
that this requires a nested induction on the vector of arguments. *)
Theorem composeP X (ρ1 ρ2 : X -> treeSigma X) t :
substitute (compose ρ1 ρ2) t = substitute ρ1 (substitute ρ2 t).
Proof.
unfold compose, substitute.
induction t as [x|f ts IH]; trivial.
simpl; f_equal.
induction ts as [|n t ts IH']; trivial.
simpl.
destruct IH as [e IH].
rewrite e.
f_equal.
now apply IH'.
Qed.
End Dev.
In order to do this you need to use the operations of the monad, typically:
Set Implicit Arguments.
Unset Strict Implicit.
Unset Printing Implicit Defensive.
Section MonadKleisli.
(* Set Universe Polymorphism. // Needed for real use cases *)
Variable (M : Type -> Type).
Variable (Ma : forall A B, (A -> B) -> M A -> M B).
Variable (η : forall A, A -> M A).
Variable (μ : forall A, M (M A) -> M A).
(* Compose: o^* *)
Definition oStar A B C (f : A -> M B) (g: B -> M C) : A -> M C :=
fun x => μ (Ma g (f x)).
(* Bind *)
Definition bind A B (x : M A) (f : A -> M B) : M B := oStar (fun _ => x) f tt.
End MonadKleisli.
Depending on how you organize your definitions, proving your desired properties will likely require functional extensionality; not a big deal usually, but something to keep in mind.
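To make that concrete, here is a minimal self-contained sketch with the list monad standing in for treeSigma (all names here are illustrative): associativity of Kleisli composition stated as an equality of functions needs functional_extensionality, while each pointwise instance follows by plain list induction.
Require Import Coq.Lists.List.
Require Import Coq.Logic.FunctionalExtensionality.
Import ListNotations.
(* Kleisli composition for the list monad. *)
Definition bind {A B} (x : list A) (f : A -> list B) : list B :=
  concat (map f x).
Definition kleisli {A B C} (f : A -> list B) (g : B -> list C) : A -> list C :=
  fun a => bind (f a) g.
Lemma bind_cons {A B} (f : A -> list B) x l :
  bind (x :: l) f = f x ++ bind l f.
Proof. reflexivity. Qed.
Lemma bind_app {A B} (f : A -> list B) (l1 l2 : list A) :
  bind (l1 ++ l2) f = bind l1 f ++ bind l2 f.
Proof. unfold bind; now rewrite map_app, concat_app. Qed.
(* The extensionality axiom reduces the functional equation to the
   pointwise one, which then follows by induction on the list. *)
Lemma kleisli_assoc {A B C D} (f : A -> list B) (g : B -> list C) (h : C -> list D) :
  kleisli (kleisli f g) h = kleisli f (kleisli g h).
Proof.
  apply functional_extensionality; intros a.
  unfold kleisli.
  induction (f a) as [|x l IH]; [reflexivity|].
  rewrite !bind_cons, bind_app, IH; reflexivity.
Qed.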

How do I write Functor and Applicative for Vector-Composition?

I'm trying to learn more about dependent types using Idris.
The example I am trying to emulate uses composition of Vectors.
I understand Functor and Applicative implementations for Vectors but I am struggling to implement them for the Composition.
data Vector : Nat -> Type -> Type where
Nil : Vector Z a
(::) : a -> Vector n a -> Vector (S n) a
Functor (Vector n) where
map f [] = []
map f (x::xs) = f x :: map f xs
Applicative (Vector n) where
pure = replicate _
fs <*> vs = zipWith apply fs vs
Now the composition and decomposition functions look like this:
data (:++) : (b -> c) -> (a -> b) -> a -> Type where
Comp : (f . g) x -> (f :++ g) x
unComp : (f :++ g) a -> (f . g) a
unComp (Comp a) = a
Used with Vectors, it encapsulates a Vector of Vectors.
Now I need an Applicative for the Composition (Vector n) :++ (Vector n).
I can't even get Functor to work and am mainly trying to see what I'm doing wrong. Since Functor is already implemented for Vectors, I assumed the following would work:
Functor ((Vector n) :++ (Vector n)) where
map f (Comp []) = Comp []
map f (Comp (x::xs)) = Comp ((f x) :: (map f (Comp xs)))
but the compiler gives an error message:
When checking an application of constructor Main.:::
Unifying a and Vector (S n) a would lead to infinite value
Isn't unifying an element of type a and a Vector n a exactly the purpose of (::)?
I am obviously doing something wrong and I can't get this to work. I also have the feeling it's probably easy to solve, but after hours of reading and trying I still don't get it.
If someone could give me advice or explain to me how the Functor and Applicative implementations could look, I would be very grateful.
Update: Idris 2 now has this built in: Functor for Compose, Applicative for Compose.
I think you can implement a general instance of Functor and Applicative like with Haskell's Compose.
newtype Compose f g a = Compose { getCompose :: f (g a) }
instance (Functor f, Functor g) => Functor (Compose f g) where
fmap f (Compose x) = Compose (fmap (fmap f) x)
a <$ (Compose x) = Compose (fmap (a <$) x)
instance (Applicative f, Applicative g) => Applicative (Compose f g) where
pure x = Compose (pure (pure x))
Compose f <*> Compose x = Compose (liftA2 (<*>) f x)
liftA2 f (Compose x) (Compose y) =
Compose (liftA2 (liftA2 f) x y)
To answer your specific question (but don't do it this way):
Functor ((Vector n) :++ (Vector n)) where
map f (Comp x) = Comp $ map (map f) x

Idris: proof about concatenation of vectors

Assume I have the following Idris source code:
module Source
import Data.Vect
--in order to avoid compiler confusion between Prelude.List.(++), Prelude.String.(++) and Data.Vect.(++)
infixl 0 +++
(+++) : Vect n a -> Vect m a -> Vect (n+m) a
v +++ w = v ++ w
--NB: further down in the question I'll assume this definition isn't needed because the compiler
-- will have enough context to disambiguate between these and figure out that Data.Vect.(++)
-- is the "correct" one to use.
lemma : reverse (n :: ns) +++ (n :: ns) = reverse ns +++ (n :: n :: ns)
lemma {ns = []} = Refl
lemma {ns = n' :: ns} = ?lemma_rhs
As shown, the base case for lemma is trivially Refl. But I can't seem to find a way to prove the inductive case: the REPL "just" spits out the following:
*source> :t lemma_rhs
phTy : Type
n1 : phTy
len : Nat
ns : Vect len phTy
n : phTy
-----------------------------------------
lemma_rhs : Data.Vect.reverse, go phTy
(S (S len))
(n :: n1 :: ns)
[n1, n]
ns ++
n :: n1 :: ns =
Data.Vect.reverse, go phTy (S len) (n1 :: ns) [n1] ns ++
n :: n :: n1 :: ns
I understand that phTy stands for "phantom type", the implicit type of the vectors I'm considering. I also understand that go is the name of the function defined in the where clause for the definition of the library function reverse.
Question
How can I continue the proof? Is my inductive strategy sound? Is there a better one?
Context
This has come up in one of my toy projects, where I try to define arbitrary tensors; specifically, this seems to be needed in order to define "full index contraction". I'll elaborate a little bit on that:
I define tensors in a way that's roughly equivalent to
data Tensor : (rank : Nat) -> (shape : Vect rank Nat) -> Type -> Type where
Scalar : a -> Tensor Z [] a
Vector : Vect n (Tensor rank shape a) -> Tensor (S rank) (n :: shape) a
glossing over the rest of the source code (since it isn't relevant, and it's quite long and uninteresting as of now), I was able to define the following functions
contractIndex : Num a =>
Tensor (r1 + (2 + r2)) (s1 ++ (n :: n :: s2)) a ->
Tensor (r1 + r2) (s1 ++ s2) a
tensorProduct : Num a =>
Tensor r1 s1 a ->
Tensor r2 s2 a ->
Tensor (r1 + r2) (s1 ++ s2) a
contractProduct : Num a =>
Tensor (S r1) s1 a ->
Tensor (S r2) ((last s1) :: s2) a ->
Tensor (r1 + r2) ((take r1 s1) ++ s2) a
and I'm working on this other one
fullIndexContraction : Num a =>
Tensor r (reverse ns) a ->
Tensor r ns a ->
Tensor 0 [] a
fullIndexContraction {r = Z} {ns = []} t s = t * s
fullIndexContraction {r = S r} {ns = n :: ns} t s = ?rhs
that should "iterate contractProduct as much as possible (that is, r times)"; equivalently, it could be possible to define it as tensorProduct composed with as many contractIndex as possible (again, that amount should be r).
I'm including all this because maybe it's easier to just solve this problem without proving the lemma above: if that were the case, I'd be fully satisfied as well. I just thought the "shorter" version above might be easier to deal with, since I'm pretty sure I'll be able to figure out the missing pieces myself.
The version of Idris I'm using is 1.3.2-git:PRE (that's what the REPL says when invoked from the command line).
Edit: xash's answer covers almost everything, and I was able to write the following functions
nreverse_id : (k : Nat) -> nreverse k = k
contractAllIndices : Num a =>
Tensor (nreverse k + k) (reverse ns ++ ns) a ->
Tensor Z [] a
contractAllProduct : Num a =>
Tensor (nreverse k) (reverse ns) a ->
Tensor k ns a ->
Tensor Z [] a
I also wrote a "fancy" version of reverse, let's call it fancy_reverse, that automatically rewrites nreverse k = k in its result. So I tried to write a function that doesn't have nreverse in its signature, something like
fancy_reverse : Vect n a -> Vect n a
fancy_reverse {n} xs =
rewrite sym $ nreverse_id n in
reverse xs
contract : Num a =>
{auto eql : fancy_reverse ns1 = ns2} ->
Tensor k ns1 a ->
Tensor k ns2 a ->
Tensor Z [] a
contract {eql} {k} {ns1} {ns2} t s =
flip contractAllProduct s $
rewrite sym $ nreverse_id k in
?rhs
now, the inferred type for rhs is Tensor (nreverse k) (reverse ns2) and I have in scope a rewrite rule for k = nreverse k, but I can't seem to wrap my head around how to rewrite the implicit eql proof to make this type check: am I doing something wrong?
The prelude Data.Vect.reverse is hard to reason about, because AFAIK the go helper function won't be resolved in the typechecker. The usual approach is to define your own, simpler reverse that doesn't need rewrite at the type level. Like here, for example:
%hide Data.Vect.reverse
nreverse : Nat -> Nat
nreverse Z = Z
nreverse (S n) = nreverse n + 1
reverse : Vect n a -> Vect (nreverse n) a
reverse [] = []
reverse (x :: xs) = reverse xs ++ [x]
lemma : {xs : Vect n a} -> reverse (x :: xs) = reverse xs ++ [x]
lemma = Refl
As you can see, this definition is straightforward enough that the equivalent lemma can be solved without further work. Thus you can probably just match on the reverse ns in fullIndexContraction like in this example:
data Foo : Vect n Nat -> Type where
MkFoo : (x : Vect n Nat) -> Foo x
foo : Foo a -> Foo (reverse a) -> Nat
foo (MkFoo []) (MkFoo []) = Z
foo (MkFoo $ x::xs) (MkFoo $ reverse xs ++ [x]) =
x + foo (MkFoo xs) (MkFoo $ reverse xs)
To your comment: first, len = nreverse len must sometimes be used, but if you had rewrites at the type level (through the usual n + 1 = 1 + n shenanigans) you would have the same problem (if not with even more complicated proofs, but this is just a guess).
vectAppendAssociative is actually enough:
lemma2 : Main.reverse (n :: ns1) ++ ns2 = Main.reverse ns1 ++ (n :: ns2)
lemma2 {n} {ns1} {ns2} = sym $ vectAppendAssociative (reverse ns1) [n] ns2

Unfold/simp has no effect in a primrec type class instantiation proof

Up until several days ago, I always defined a type, and then proved theorems directly about the type. Now I'm trying to use type classes.
Problem
The problem is that I can't instantiate cNAT for my type myD below, and it appears it's because simp has no effect on the abstract function cNAT, which I've made concrete with my primrec function cNAT_myD. I can only guess what's happening because of the automation that happens after instance proof.
Questions
Q1: Below, at the statement instantiation myD :: (type) cNAT, can you tell me how to finish the proof, and why I can prove the following theorem, but not the type class proof, which requires injective?
theorem dNAT_1_to_1: "(dNAT n = dNAT m) ==> n = m"
assumes injective: "(cNAT n = cNAT m) ==> n = m"
Q2: This is not as important, but at the bottom is this statement:
instantiation myD :: (type) cNAT2
It involves another way I was trying to instantiate cNAT. Can you tell me why I get Failed to refine any pending goal at the show statement? I put some comments in the source to explain some of what I did to set it up. I used this slightly modified formula for the requirement injective:
assumes injective: "!!n m. (cNAT2 n = cNAT2 m) --> n = m"
Specifics
My contrived datatype is this, which may be useful to me someday: (Update: Well, for another example maybe. A good mental exercise is for me to try and figure out how I can actually get something inside a 'a myD list, other than []. With BNF, something like datatype_new 'a myD = myS "'a myD fset" gives me the warning that there's an unused type variable on the right-hand side)
datatype 'a myD = myL "'a myD list"
The type class is this, which requires an injective function from nat to 'a:
class cNAT =
fixes cNAT :: "nat => 'a"
assumes injective: "(cNAT n = cNAT m) ==> n = m"
dNAT: this non-type class version of cNAT works
fun get_myL :: "'a myD => 'a myD list" where
"get_myL (myL L) = L"
primrec dNAT :: "nat => 'a myD" where
"dNAT 0 = myL []"
|"dNAT (Suc n) = myL (myL [] # get_myL(dNAT n))"
fun myD2nat :: "'a myD => nat" where
"myD2nat (myL []) = 0"
|"myD2nat (myL (x # xs)) = Suc(myD2nat (myL xs))"
theorem left_inverse_1 [simp]:
"myD2nat(dNAT n) = n"
apply(induct n, auto)
by(metis get_myL.cases get_myL.simps)
theorem dNAT_1_to_1:
"(dNAT n = dNAT m) ==> n = m"
apply(induct n)
apply(simp) (*
The simp method expanded dNAT.*)
apply(metis left_inverse_1 myD2nat.simps(1))
by (metis left_inverse_1)
cNAT: type class version that I can't instantiate
instantiation myD :: (type) cNAT
begin
primrec cNAT_myD :: "nat => 'a myD" where
"cNAT_myD 0 = myL []"
|"cNAT_myD (Suc n) = myL (myL [] # get_myL(cNAT_myD n))"
instance
proof
fix n m :: nat
show "cNAT n = cNAT m ==> n = m"
apply(induct n)
apply(simp) (*
The simp method won't expand cNAT to cNAT_myD's definition.*)
by(metis injective)+ (*
Metis proved it without unfolding cNAT_myD. It's useless. Goals always remain,
and the type variables in the output panel are all weird.*)
oops
end
cNAT2: Failed to refine any pending goal at show
(*I define a variation of `injective` in which the `assumes` definition, the
goal, and the `show` statement are exactly the same, and that strange `fails
to refine any pending goal` shows up.*)
class cNAT2 =
fixes cNAT2 :: "nat => 'a"
assumes injective: "!!n m. (cNAT2 n = cNAT2 m) --> n = m"
instantiation myD :: (type) cNAT2
begin
primrec cNAT2_myD :: "nat => 'a myD" where
"cNAT2_myD 0 = myL []"
|"cNAT2_myD (Suc n) = myL (myL [] # get_myL(cNAT2_myD n))"
instance
proof (*
goal: !!n m. cNAT2 n = cNAT2 m --> n = m.*)
show
"!!n m. cNAT2 n = cNAT2 m --> n = m"
(*Failed to refine any pending goal
Local statement fails to refine any pending goal
Failed attempt to solve goal by exported rule:
cNAT2 (n::nat) = cNAT2 (m::nat) --> n = m *)
Your function cNAT is polymorphic in its result type, but the type variable does not appear among the parameters. This often causes type inference to compute a type which is more general than you want. In your case for cNAT, Isabelle infers for the two occurrences of cNAT in the show statement the type nat => 'b for some 'b of sort cNAT, but their type in the goal is nat => 'a myD. You can see this in jEdit by Ctrl-hovering over the cNAT occurrences to inspect the types. In ProofGeneral, you can enable printing of types with using [[show_consts]].
Therefore, you have to explicitly constrain types in the show statement as follows:
fix n m
assume "(cNAT n :: 'a myD) = cNAT m"
then show "n = m"
Note that it is usually not a good idea to use Isabelle's meta-connectives !! and ==> inside a show statement; you should rephrase them using fix/assume/show.
