Coq: proof of "lens is closed under composition" - functional-programming

I'm a functional programmer completely new to Coq. In fact, I've just finished Mike Nahas's tutorial. In particular, I'm interested in this language to formalize some laws related to optics. For example, I'd like to know how to encode the fact that lenses are closed under composition.
I started with GetPut. The aforementioned tutorial doesn't cover records, but I think they're what I need to encode a lens. As far as I know, the composition method should be implemented as a Definition. Finally, I assume that my theorem needs to receive evidence of GetPut for the lenses being composed, in order to prove that the same law holds for the resulting lens by means of an equality. It's not much, but this is my (incomplete and wrong) proof skeleton:
Record lens (S : Type) (A : Type) := mkLens {
  get : S -> A;
  put : S -> A -> S }.

Definition compose_ln (S A B : Type) (ln1: lens S A) (ln2 : lens A B) : lens S B :=
  {| get := fun (s : S) => get ln1 s;
     put := fun (s : S) (a : A) => put ln1 s (put ln2 (get ln1 s) a) |}.
Theorem closed_GetPut :
  (forall S A B : Type,
    (forall (s : S),
      (forall (ln1 : lens S A) (ln2 : lens A B),
        (exists GetPut_proof : ln1 (* ln1 holds GetPut? *)),
        (exists GetPut_proof : ln2 (* ln2 holds GetPut? *)),
        (put (compose_ln ln1 ln2) s (get (compose_ln ln1 ln2) s) = s)))).
Proof.
(* ... *)
Qed.
Having said that:
Am I on the right path to write this proof? Is there a better way to approach it?
Where can I find similar examples to use as a guide?
Given my FP background, what's the best material for continuing to learn Coq? (I was thinking of Software Foundations by Benjamin C. Pierce.)
Thank you!

This looks roughly reasonable, but it needs some tidying.
First, you might as well remove the two "exists" quantifiers (change them into plain implications).
Second, the types of the last two assumptions should not be just "ln1" and "ln2" but something like "very_well_behaved ln1" and "very_well_behaved ln2". (And then the type of the final result can be "very_well_behaved (compose ln1 ln2)".)

After reading the first chapters of Logical Foundations (which I definitely recommend if your background is functional), I was able to implement the proof that lenses are closed under composition in Coq. First, we need to define lens and compose:
Record lens (S A : Type) := {
  get : S -> A;
  put : S -> A -> S }.

Definition compose {S A B : Type} (ln1 : lens S A) (ln2 : lens A B) : lens S B :=
  {| get := fun (s : S) => ln2.(get A B) (ln1.(get S A) s);
     put := fun (s : S) (b : B) => ln1.(put S A) s (ln2.(put A B) (ln1.(get S A) s) b) |}.
Notice the fixed syntax used to access the fields of the record.
Then we need to declare the properties involved, to finally reach very_well_behaved (as suggested by Benjamin Pierce in the accepted answer):
Definition get_put {S A : Type} (ln : lens S A) : Prop :=
  forall (s : S), ln.(put S A) s (ln.(get S A) s) = s.

Definition put_get {S A : Type} (ln : lens S A) : Prop :=
  forall (s : S) (a : A), ln.(get S A) (ln.(put S A) s a) = a.

Definition put_put {S A : Type} (ln : lens S A) : Prop :=
  forall (s : S) (a1 : A) (a2 : A),
    ln.(put S A) (ln.(put S A) s a1) a2 = ln.(put S A) s a2.

Definition very_well_behaved {S A : Type} (ln : lens S A) : Prop :=
  get_put ln /\ put_get ln /\ put_put ln.
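As a quick sanity check (an extra toy example, not needed for what follows), a concrete lens such as the first projection of a pair satisfies, for instance, the get_put law:

(* Toy example: a lens focusing on the first component of a pair. *)
Definition fst_lens (X Y : Type) : lens (X * Y) X :=
  {| get := fun p => fst p;
     put := fun p x => (x, snd p) |}.

Lemma fst_lens_get_put (X Y : Type) : get_put (fst_lens X Y).
Proof. intros [x y]. reflexivity. Qed.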
Once the properties are defined, the next (and biggest) challenge consists of proving that very well-behaved lenses are closed under composition. To do so, we can use the following theorem skeleton:
Theorem compose_is_closed : forall (S A B : Type) (ln1 : lens S A) (ln2 : lens A B),
  very_well_behaved ln1 -> very_well_behaved ln2 -> very_well_behaved (compose ln1 ln2).
Proof.
  (* tactics here *)
Qed.
Right now my Coq knowledge is quite limited, which is why I don't include the tactics here; I feel that my current implementation is still naive. Anyway, if you're curious, you can find the whole proof in this gist.
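For reference, one possible tactic script for the skeleton above might look roughly like this (a sketch, not necessarily the proof from the gist; it assumes unfold/simpl are enough to expose compose):

Theorem compose_is_closed : forall (S A B : Type) (ln1 : lens S A) (ln2 : lens A B),
  very_well_behaved ln1 -> very_well_behaved ln2 -> very_well_behaved (compose ln1 ln2).
Proof.
  intros S A B ln1 ln2 [GP1 [PG1 PP1]] [GP2 [PG2 PP2]].
  unfold very_well_behaved, get_put, put_get, put_put in *.
  repeat split; intros; unfold compose; simpl.
  - (* get_put *) now rewrite GP2, GP1.
  - (* put_get *) now rewrite PG1, PG2.
  - (* put_put *) now rewrite PG1, PP2, PP1.
Qed.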

Related

Two Coq Problems SOS: one is about IndProp, another has something to do with recursion, I guess

(* 4. Let oddn and evenn be the predicates that test whether a given number
is odd or even. Show that the sum of an odd number with an even number is odd. *)
Inductive oddn : nat -> Prop :=
| odd1 : oddn 1
| odd2 : forall n, oddn n -> oddn (S (S n)).
Inductive evenn : nat -> Prop :=
| even1 : evenn 0
| even2 : forall n, evenn n -> evenn (S (S n)).
Theorem odd_add : forall n m, oddn n -> evenn m -> oddn (n + m).
Proof. intros. destruct m.
+ Search add. rewrite <- plus_n_O. apply H.
+ destruct H.
++ simpl. apply odd2.
I don't know how I can prove this theorem, since I cannot relate oddn to evenn.
(* 6. We call a natural number good if the sum of all
its digits is divisible by 5. For example 122 is good
but 93 is not. Define a function count such that
(count n) returns the number of good numbers smaller than
or equal to n. Here we assume that 0 <= n < 10000.
Hint: You may find the "let ... in" struct useful. You may
directly use the div and modulo functions defined in the
standard library of Coq. *)
Definition isGood(n:nat) : bool :=
Fixpoint count (n : nat) : nat :=
match n with
| 0 => 1
| S n' => if isGood n then 1 + count n'
else count n'
end.
Compute count 15.
Example count_test1 : count 15 = 3.
Proof. reflexivity. Qed.
Example count_test2 : count 2005 = 401.
Proof. reflexivity. Qed.
For the second problem, I got stuck because the recursion I defined won't be accepted by Coq (non-decreasing?).
I'm just stuck on these two problems; can anyone work them out?
If you want to define oddn and evenn independently, you may prove a lemma that relates these two predicates, like:
Remark R : forall n, (evenn n <-> oddn (S n)) /\
(oddn n <-> evenn (S n)).
(* proof by induction on n *)
Then it's easy to apply this remark to solve your first exercise.
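To make that concrete, here is one possible way to spell out the remark and its application; this is my own tactic script (the induction is left as an exercise above), so the details may well differ from yours:

Remark R : forall n, (evenn n <-> oddn (S n)) /\
                     (oddn n <-> evenn (S n)).
Proof.
  induction n as [| n IH].
  - (* n = 0 *)
    split; split; intro H.
    + apply odd1.
    + apply even1.
    + inversion H.              (* no constructor builds oddn 0 *)
    + inversion H.              (* no constructor builds evenn 1 *)
  - (* n = S n *)
    destruct IH as [[He1 He2] [Ho1 Ho2]].
    split; split; intro H.
    + apply odd2. apply Ho2. exact H.
    + inversion H; subst. apply Ho1. assumption.
    + apply even2. apply He2. exact H.
    + inversion H; subst. apply He1. assumption.
Qed.

Theorem odd_add : forall n m, oddn n -> evenn m -> oddn (n + m).
Proof.
  intros n m Hn Hm.
  induction Hn as [| n' Hn' IH]; simpl.
  - (* n = 1: the first half of R at m gives evenn m -> oddn (S m) *)
    destruct (R m) as [[HR _] _]. apply HR. exact Hm.
  - (* n = S (S n') *)
    apply odd2. exact IH.
Qed.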
Please note that you may define even and odd in several other ways:
as mutually inductive predicates
with existential quantifiers
define even, then odd in terms of even
...
I don't understand the problem with the second exercise.
A few days ago, we discussed a function sum_digits that you can use (with modulo) to define isGood.
Your function count looks OK, but quite inefficient (with Peano natural numbers).
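For what it's worth, here is one way the missing pieces could be filled in. This is my own sketch: sum_digits below just hard-codes the four-digit bound from the problem statement (it is not necessarily the sum_digits from that earlier discussion), and count keeps your structure.

Definition sum_digits (n : nat) : nat :=
  Nat.modulo (Nat.div n 1000) 10 + Nat.modulo (Nat.div n 100) 10 +
  Nat.modulo (Nat.div n 10) 10 + Nat.modulo n 10.

Definition isGood (n : nat) : bool :=
  Nat.eqb (Nat.modulo (sum_digits n) 5) 0.

Fixpoint count (n : nat) : nat :=
  match n with
  | 0 => 1                        (* 0 is good: its digit sum is 0 *)
  | S n' => if isGood n then 1 + count n'
            else count n'
  end.

Example count_test1 : count 15 = 3.
Proof. reflexivity. Qed.

With isGood actually defined, the recursion in count is plainly structural on n', so Coq accepts it as written.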

Idris: proof about concatenation of vectors

Assume I have the following Idris source code:
module Source
import Data.Vect
--in order to avoid compiler confusion between Prelude.List.(++), Prelude.String.(++) and Data.Vect.(++)
infixl 0 +++
(+++) : Vect n a -> Vect m a -> Vect (n+m) a
v +++ w = v ++ w
--NB: further down in the question I'll assume this definition isn't needed because the compiler
-- will have enough context to disambiguate between these and figure out that Data.Vect.(++)
-- is the "correct" one to use.
lemma : reverse (n :: ns) +++ (n :: ns) = reverse ns +++ (n :: n :: ns)
lemma {ns = []} = Refl
lemma {ns = n' :: ns} = ?lemma_rhs
As shown, the base case for lemma is trivially Refl. But I can't seem to find a way to prove the inductive case: the REPL "just" spits out the following
*source> :t lemma_rhs
phTy : Type
n1 : phTy
len : Nat
ns : Vect len phTy
n : phTy
-----------------------------------------
lemma_rhs : Data.Vect.reverse, go phTy
(S (S len))
(n :: n1 :: ns)
[n1, n]
ns ++
n :: n1 :: ns =
Data.Vect.reverse, go phTy (S len) (n1 :: ns) [n1] ns ++
n :: n :: n1 :: ns
I understand that phTy stands for "phantom type", the implicit type of the vectors I'm considering. I also understand that go is the name of the function defined in the where clause for the definition of the library function reverse.
Question
How can I continue the proof? Is my inductive strategy sound? Is there a better one?
Context
This has come up in one of my toy projects, where I try to define arbitrary tensors; specifically, this seems to be needed in order to define "full index contraction". I'll elaborate a little bit on that:
I define tensors in a way that's roughly equivalent to
data Tensor : (rank : Nat) -> (shape : Vect rank Nat) -> Type -> Type where
Scalar : a -> Tensor Z [] a
Vector : Vect n (Tensor rank shape a) -> Tensor (S rank) (n :: shape) a
Glossing over the rest of the source code (since it isn't relevant, and it's quite long and uninteresting as of now), I was able to define the following functions
contractIndex : Num a =>
Tensor (r1 + (2 + r2)) (s1 ++ (n :: n :: s2)) a ->
Tensor (r1 + r2) (s1 ++ s2) a
tensorProduct : Num a =>
Tensor r1 s1 a ->
Tensor r2 s2 a ->
Tensor (r1 + r2) (s1 ++ s2) a
contractProduct : Num a =>
Tensor (S r1) s1 a ->
Tensor (S r2) ((last s1) :: s2) a ->
Tensor (r1 + r2) ((take r1 s1) ++ s2) a
and I'm working on this other one
fullIndexContraction : Num a =>
Tensor r (reverse ns) a ->
Tensor r ns a ->
Tensor 0 [] a
fullIndexContraction {r = Z} {ns = []} t s = t * s
fullIndexContraction {r = S r} {ns = n :: ns} t s = ?rhs
that should "iterate contractProduct as much as possible (that is, r times)"; equivalently, it could be possible to define it as tensorProduct composed with as many contractIndex as possible (again, that amount should be r).
I'm including all this because maybe it's easier to just solve this problem without proving the lemma above: if that were the case, I'd be fully satisfied as well. I just thought the "shorter" version above might be easier to deal with, since I'm pretty sure I'll be able to figure out the missing pieces myself.
The version of Idris I'm using is 1.3.2-git:PRE (that's what the REPL says when invoked from the command line).
Edit: xash's answer covers almost everything, and I was able to write the following functions
nreverse_id : (k : Nat) -> nreverse k = k
contractAllIndices : Num a =>
Tensor (nreverse k + k) (reverse ns ++ ns) a ->
Tensor Z [] a
contractAllProduct : Num a =>
Tensor (nreverse k) (reverse ns) a ->
Tensor k ns a ->
Tensor Z [] a
I also wrote a "fancy" version of reverse, let's call it fancy_reverse, that automatically rewrites nreverse k = k in its result. So I tried to write a function that doesn't have nreverse in its signature, something like
fancy_reverse : Vect n a -> Vect n a
fancy_reverse {n} xs =
rewrite sym $ nreverse_id n in
reverse xs
contract : Num a =>
{auto eql : fancy_reverse ns1 = ns2} ->
Tensor k ns1 a ->
Tensor k ns2 a ->
Tensor Z [] a
contract {eql} {k} {ns1} {ns2} t s =
flip contractAllProduct s $
rewrite sym $ nreverse_id k in
?rhs
Now, the inferred type for rhs is Tensor (nreverse k) (reverse ns2), and I have in scope a rewrite rule for k = nreverse k, but I can't seem to wrap my head around how to rewrite the implicit eql proof to make this typecheck: am I doing something wrong?
The prelude Data.Vect.reverse is hard to reason about, because AFAIK the go helper function won't be resolved in the typechecker. The usual approach is to define your own, simpler reverse that doesn't need rewrite at the type level. Like here, for example:
%hide Data.Vect.reverse
nreverse : Nat -> Nat
nreverse Z = Z
nreverse (S n) = nreverse n + 1
reverse : Vect n a -> Vect (nreverse n) a
reverse [] = []
reverse (x :: xs) = reverse xs ++ [x]
lemma : {xs : Vect n a} -> reverse (x :: xs) = reverse xs ++ [x]
lemma = Refl
As you can see, this definition is straightforward enough that the equivalent lemma can be solved without further work. Thus you can probably just match on reverse ns in fullIndexContraction, like in this example:
data Foo : Vect n Nat -> Type where
MkFoo : (x : Vect n Nat) -> Foo x
foo : Foo a -> Foo (reverse a) -> Nat
foo (MkFoo []) (MkFoo []) = Z
foo (MkFoo $ x::xs) (MkFoo $ reverse xs ++ [x]) =
x + foo (MkFoo xs) (MkFoo $ reverse xs)
To your comment: first, len = nreverse len must sometimes be used, but if you had rewrite at the type level (through the usual n + 1 = 1 + n shenanigans) you would have the same problem (if not with even more complicated proofs, but this is just a guess).
vectAppendAssociative is actually enough:
lemma2 : Main.reverse (n :: ns1) ++ ns2 = Main.reverse ns1 ++ (n :: ns2)
lemma2 {n} {ns1} {ns2} = sym $ vectAppendAssociative (reverse ns1) [n] ns2

Can I do “complex” mutual recursion in Coq without let-binding?

Consider the following pair of mutually recursive Coq data types, which represent a Forest of nonempty Trees. Each Branch of a Tree holds an extra boolean flag, which we can extract with isOK.
Inductive Forest a : Type
:= Empty : Forest a
| WithTree : Tree a -> Forest a -> Forest a
with Tree a : Type
:= Branch : bool -> a -> Forest a -> Tree a.
Arguments Empty {_}.
Arguments WithTree {_} _ _.
Arguments Branch {_} _ _ _.
Definition isOK {a} (t : Tree a) : bool :=
match t with
| Branch ok _ _ => ok
end.
Now, if we ignore this boolean flag, we can write a pair of mapping functions to apply a function to every value in a Forest or a Tree, and this works fine:
Fixpoint mapForest_always {a} (f : a -> a) (ts0 : Forest a) {struct ts0} : Forest a :=
match ts0 with
| Empty => Empty
| WithTree t ts => WithTree (mapTree_always f t) (mapForest_always f ts)
end
with mapTree_always {a} (f : a -> a) (t : Tree a) {struct t} : Tree a :=
match t with
| Branch ok x ts => Branch ok (f x) (mapForest_always f ts)
end.
However, suppose the boolean represents some validity check, which would be more complicated in real code. So we check the boolean first, and only actually recurse if necessary. This means we have three mutually recursive functions, but one of them is just handing the work along. Unfortunately, this doesn’t work:
Fail Fixpoint mapForest_bad {a} (f : a -> a) (ts0 : Forest a) {struct ts0} : Forest a :=
match ts0 with
| Empty => Empty
| WithTree t ts => WithTree (mapTree_bad f t) (mapForest_bad f ts)
end
with mapTree_bad {a} (f : a -> a) (t : Tree a) {struct t} : Tree a :=
if isOK t
then mapOKTree_bad f t
else t
with mapOKTree_bad {a} (f : a -> a) (t : Tree a) {struct t} : Tree a :=
match t with
| Branch ok x ts => Branch ok (f x) (mapForest_bad f ts)
end.
The problem is that mapTree_bad calls into mapOKTree_bad on an argument that isn’t actually smaller.
Except… all mapOKTree_bad is doing is an extra step after some preprocessing. This will always terminate, but Coq can't see that. To persuade the termination checker, we can instead define mapOKTree_good, which is the same thing but as a local let-binding; then the termination checker will see through the let-binding and allow us to define mapForest_good and mapTree_good. If we want mapOKTree_good itself, we can just add a plain old definition with the same body as the let-binding after we've defined the mutually recursive functions:
Fixpoint mapForest_good {a} (f : a -> a) (ts0 : Forest a) {struct ts0} : Forest a :=
match ts0 with
| Empty => Empty
| WithTree t ts => WithTree (mapTree_good f t) (mapForest_good f ts)
end
with mapTree_good {a} (f : a -> a) (t : Tree a) {struct t} : Tree a :=
let mapOKTree_good {a} (f : a -> a) (t : Tree a) : Tree a :=
match t with
| Branch ok x ts => Branch ok (f x) (mapForest_good f ts)
end in
if isOK t
then mapOKTree_good f t
else t.
Definition mapOKTree_good {a} (f : a -> a) (t : Tree a) : Tree a :=
match t with
| Branch ok x ts => Branch ok (f x) (mapForest_good f ts)
end.
This works, but it’s not pretty. Is there any way to convince Coq’s termination checker to accept the _bad variants, or is the _good trick the best I’ve got? A command that does work for me, such as Program Fixpoint or Function, is a totally reasonable solution as well.
Very partial answer: we can refactor the two definitions of mapOKTree_good with an intermediate definition parameterized by mapForest_good just before it is defined.
Definition mapOKTree_good_ {a} mapForest_good
(f : a -> a) (t : Tree a) : Tree a :=
match t with
| Branch ok x ts => Branch ok (f x) (mapForest_good f ts)
end.
Fixpoint mapForest_good {a} (f : a -> a) (ts0 : Forest a) {struct ts0} : Forest a :=
match ts0 with
| Empty => Empty
| WithTree t ts => WithTree (mapTree_good f t) (mapForest_good f ts)
end
with mapTree_good {a} (f : a -> a) (t : Tree a) {struct t} : Tree a :=
if isOK t
then mapOKTree_good_ mapForest_good f t
else t.
Definition mapOKTree_good {a} := @mapOKTree_good_ a mapForest_good.

Nested recursion and `Program Fixpoint` or `Function`

I’d like to define the following function using Program Fixpoint or Function in Coq:
Require Import Coq.Lists.List.
Import ListNotations.
Require Import Coq.Program.Wf.
Require Import Recdef.
Inductive Tree := Node : nat -> list Tree -> Tree.
Fixpoint height (t : Tree) : nat :=
match t with
| Node x ts => S (fold_right Nat.max 0 (map height ts))
end.
Program Fixpoint mapTree (f : nat -> nat) (t : Tree) {measure (height t)} : Tree :=
match t with
Node x ts => Node (f x) (map (fun t => mapTree f t) ts)
end.
Next Obligation.
Unfortunately, at this point I have a proof obligation height t < height (Node x ts) without knowing that t is a member of ts.
It's similar with Function instead of Program Fixpoint, except that Function detects the problem and aborts the definition:
Error:
the term fun t : Tree => mapTree f t can not contain a recursive call to mapTree
I would expect to get a proof obligation of In t ts → height t < height (Node x ts).
Is there a way of getting that that does not involve restructuring the function definition? (I know work-arounds that require inlining the definition of map here, for example – I’d like to avoid these.)
Isabelle
To justify that expectation, let me show what happens when I do the same in Isabelle, using the function command, which is (AFAIK) related to Coq’s Function command:
theory Tree imports Main begin
datatype Tree = Node nat "Tree list"
fun height where
"height (Node _ ts) = Suc (foldr max (map height ts) 0)"
function mapTree where
"mapTree f (Node x ts) = Node (f x) (map (λ t. mapTree f t) ts)"
by pat_completeness auto
termination
proof (relation "measure (λ(f,t). height t)")
show "wf (measure (λ(f, t). height t))" by auto
next
fix f :: "nat ⇒ nat" and x :: nat and ts :: "Tree list" and t
assume "t ∈ set ts"
thus "((f, t), (f, Node x ts)) ∈ measure (λ(f, t). height t)"
by (induction ts) auto
qed
In the termination proof, I get the assumption t ∈ set ts.
Note that Isabelle does not require a manual termination proof here, and the following definition works just fine:
fun mapTree where
"mapTree f (Node x ts) = Node (f x) (map (λ t. mapTree f t) ts)"
This works because the map function has a “congruence lemma” of the form
xs = ys ⟹ (⋀x. x ∈ set ys ⟹ f x = g x) ⟹ map f xs = map g ys
that the function command uses to find out that the termination proof only needs to consider t ∈ set ts.
If such a lemma is not available, e.g. because I define
definition "map' = map"
and use that in mapTree, I get the same unprovable proof obligation as in Coq. I can make it work again by declaring a congruence lemma for map', e.g. using
declare map_cong[folded map'_def,fundef_cong]
In this case, you actually do not need well-founded recursion in its full generality:
Require Import Coq.Lists.List.
Set Implicit Arguments.
Inductive tree := Node : nat -> list tree -> tree.
Fixpoint map_tree (f : nat -> nat) (t : tree) : tree :=
match t with
| Node x ts => Node (f x) (map (fun t => map_tree f t) ts)
end.
Coq is able to figure out by itself that recursive calls to map_tree are performed on strict subterms. However, proving anything about this function is difficult, as the induction principle generated for tree is not useful:
tree_ind :
forall P : tree -> Prop,
(forall (n : nat) (l : list tree), P (Node n l)) ->
forall t : tree, P t
This is essentially the same problem you described earlier. Luckily, we can fix the issue by proving our own induction principle with a proof term.
Require Import Coq.Lists.List.
Import ListNotations.
Unset Elimination Schemes.
Inductive tree := Node : nat -> list tree -> tree.
Set Elimination Schemes.
Fixpoint tree_ind
(P : tree -> Prop)
(IH : forall (n : nat) (ts : list tree),
fold_right (fun t => and (P t)) True ts ->
P (Node n ts))
(t : tree) : P t :=
match t with
| Node n ts =>
let fix loop ts :=
match ts return fold_right (fun t' => and (P t')) True ts with
| [] => I
| t' :: ts' => conj (tree_ind P IH t') (loop ts')
end in
IH n ts (loop ts)
end.
Fixpoint map_tree (f : nat -> nat) (t : tree) : tree :=
match t with
| Node x ts => Node (f x) (map (fun t => map_tree f t) ts)
end.
The Unset Elimination Schemes command prevents Coq from generating its default (and not useful) induction principle for tree. The occurrence of fold_right in the induction hypothesis simply expresses that the predicate P holds of every tree t' appearing in ts.
Here is a statement that you can prove using this induction principle:
Lemma map_tree_comp f g t :
map_tree f (map_tree g t) = map_tree (fun n => f (g n)) t.
Proof.
induction t as [n ts IH]; simpl; f_equal.
induction ts as [|t' ts' IHts]; try easy.
simpl in *.
destruct IH as [IHt' IHts'].
specialize (IHts IHts').
now rewrite IHt', <- IHts.
Qed.
You can now do this with Equations and get the right elimination principle automatically, using either structural nested recursion or well-founded recursion.
In general, it might be advisable to avoid this problem. But if one really wants to obtain the proof obligation that Isabelle gives you, here is a way:
In Isabelle, we can give an external lemma that states that map applies its function argument only to members of the given list. In Coq, we cannot do this in an external lemma, but we can do it in the type. So instead of the normal type of map,
forall A B, (A -> B) -> list A -> list B
we want the type to say "f is only ever applied to elements of the list":
forall A B (xs : list A), (forall x : A, In x xs -> B) -> list B
(It requires reordering the arguments so that the type of f can mention xs.)
Writing this function is not trivial, and I found it easier to use a proof script:
Definition map {A B} (xs : list A) (f : forall (x:A), In x xs -> B) : list B.
Proof.
induction xs.
* exact [].
* refine (f a _ :: IHxs _).
- left. reflexivity.
- intros. eapply f. right. eassumption.
Defined.
But you can also write it “by hand”:
Fixpoint map {A B} (xs : list A) : forall (f : forall (x:A), In x xs -> B), list B :=
match xs with
| [] => fun _ => []
| x :: xs => fun f => f x (or_introl eq_refl) :: map xs (fun y h => f y (or_intror h))
end.
In either case, the result is nice: I can use this function in mapTree, i.e.
Program Fixpoint mapTree (f : nat -> nat) (t : Tree) {measure (height t)} : Tree :=
match t with
Node x ts => Node (f x) (map ts (fun t _ => mapTree f t))
end.
Next Obligation.
and I don’t have to do anything with the new argument to f, but it shows up in the the termination proof obligation, as In t ts → height t < height (Node x ts) as desired. So I can prove that and define mapTree:
simpl.
apply Lt.le_lt_n_Sm.
induction ts; inversion_clear H.
- subst. apply PeanoNat.Nat.le_max_l.
- rewrite IHts by assumption.
apply PeanoNat.Nat.le_max_r.
Qed.
It only works with Program Fixpoint, not with Function, unfortunately.

Structural recursion on a dependent parameter

I'm trying to write the sieve of Eratosthenes in Coq. I have a function crossout : forall {n:nat}, vector bool n -> nat -> vector bool n. When the sieve finds a number that is prime, it uses crossout to mark all the numbers that are not prime and then recurses on the resulting vector. The sieve obviously can't be structurally recursive on the vector itself, but it is structurally recursive on the length of the vector. What I want is to do something like this:
Fixpoint sieve {n:nat} (v:vector bool n) (acc:nat) {struct n} : list nat :=
match v with
| [] => Datatypes.nil
| false :: v' => sieve v' (S acc)
| true :: v' => Datatypes.cons acc (sieve (crossout v' acc) (S acc))
end.
But if I write it like this, Coq complains that the length of v' is not a subterm of n. I know that it is, but no matter how I structure the function, I can't seem to convince Coq that it is. Does anyone know how I can?
This is one of the most common pitfalls with dependent types in Coq. What is happening intuitively is that as soon as you pattern match on v, Coq "forgets" that the length of that vector is actually n, and loses the connection between the length of v' and the predecessor of n. The solution here is to apply what Adam Chlipala calls the convoy pattern, and make the pattern match return a function. While it is possible to do it by pattern matching on v, I think it is easier to do it by pattern matching on n:
Require Import Vector.
Axiom crossout : forall {n}, t bool n -> nat -> t bool n.
Fixpoint sieve {n:nat} : t bool n -> nat -> list nat :=
match n with
| 0 => fun _ _ => Datatypes.nil
| S n' => fun v acc =>
if hd v then
Datatypes.cons acc (sieve (crossout (tl v) acc) (S acc))
else
sieve (tl v) (S acc)
end.
Notice how the header of sieve has changed a little bit: now the return type is actually a function to help Coq's type inference.
For more information, check out Adam's book: http://adam.chlipala.net/cpdt/html/MoreDep.html.
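For a tiny self-contained illustration of the same trick outside the sieve (my own toy example, not from the answer): summing a vector by recursing on its length, with the match on n returning a function that then takes the vector.

Require Import Vector.

Fixpoint vsum {n : nat} : t nat n -> nat :=
  match n with
  | 0 => fun _ => 0
  | S n' => fun v => hd v + vsum (tl v)  (* tl v : t nat n', so the recursive call is on n' *)
  end.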

Resources