How to get an induction principle for nested fix recursion

I am working with a function that searches through a range of values.
Require Import List.
(* Implementation of ListTest omitted. *)
Definition ListTest (l : list nat) := false.
Definition SearchCountList n :=
(fix f i l := match i with
| 0 => ListTest (rev l)
| S i1 =>
(fix g j l1 := match j with
| 0 => false
| S j1 =>
if f i1 (j :: l1)
then true
else g j1 l1
end) (n + n) (i :: l)
end) n nil
.
I want to be able to reason about this function.
However, I can't seem to get Coq's built-in induction principle facilities to work.
Functional Scheme SearchCountList := Induction for SearchCountList Sort Prop.
Error: GRec not handled
It looks like Coq is set up for handling mutual recursion, not nested recursion. In this case, I have essentially two nested for loops.
However, translating to mutual recursion isn't so easy either:
Definition SearchCountList_Loop :=
fix outer n i l {struct i} :=
match i with
| 0 => ListTest (rev l)
| S i1 => inner n i1 (n + n) (i :: l)
end
with inner n i j l {struct j} :=
match j with
| 0 => false
| S j1 =>
if outer n i (j :: l)
then true
else inner n i j1 l
end
for outer
.
but that yields the error
Recursive call to inner has principal argument equal to
"n + n" instead of "i1".
So it looks like I would need to use a measure to get it to accept the definition directly. The checker is confused because j is sometimes reset; but in a nested setup that makes sense, since i has decreased, and i drives the outer loop.
So, is there a standard way of handling nested recursion, as opposed to mutual recursion? Are there easier ways to reason about the cases that don't involve writing separate induction theorems? Since I haven't found a way to generate one automatically, I guess I'm stuck with writing the induction principle directly.

There's a trick for avoiding mutual recursion in this case: you can compute f i1 inside f and pass the result to g.
Fixpoint g (f_n_i1 : list nat -> bool) (j : nat) (l1 : list nat) : bool :=
match j with
| 0 => false
| S j1 => if f_n_i1 (j :: l1) then true else g f_n_i1 j1 l1
end.
Fixpoint f (n i : nat) (l : list nat) : bool :=
match i with
| 0 => ListTest (rev l)
| S i1 => g (f n i1) (n + n) (i :: l)
end.
Definition SearchCountList (n : nat) : bool := f n n nil.
Are you sure simple induction wouldn't have been enough in the original code? What about well founded induction?
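For what it's worth, here is a minimal sketch of my own (assuming the refactored definitions above) of the kind of statement that plain structural induction now handles; the inner loop g is an ordinary fixpoint on j, so induction j suffices:

(* Sketch: if the function handed to g always returns false,
   the whole inner loop returns false. *)
Lemma g_false :
  forall (h : list nat -> bool) (j : nat) (l1 : list nat),
    (forall l', h l' = false) ->
    g h j l1 = false.
Proof.
  intros h j; induction j as [| j1 IH]; intros l1 Hh; simpl.
  - reflexivity.
  - rewrite Hh. exact (IH l1 Hh).
Qed.

The same idea applies to f: since it is structurally recursive on i, induction i directly provides an induction hypothesis about f n i1, with no separate scheme needed.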

Related

Idris: proof about concatenation of vectors

Assume I have the following Idris source code:
module Source
import Data.Vect
--in order to avoid compiler confusion between Prelude.List.(++), Prelude.String.(++) and Data.Vect.(++)
infixl 0 +++
(+++) : Vect n a -> Vect m a -> Vect (n+m) a
v +++ w = v ++ w
--NB: further down in the question I'll assume this definition isn't needed because the compiler
-- will have enough context to disambiguate between these and figure out that Data.Vect.(++)
-- is the "correct" one to use.
lemma : reverse (n :: ns) +++ (n :: ns) = reverse ns +++ (n :: n :: ns)
lemma {ns = []} = Refl
lemma {ns = n' :: ns} = ?lemma_rhs
As shown, the base case for lemma is trivially Refl. But I can't seem to find a way to prove the inductive case: the REPL "just" spits out the following
*source> :t lemma_rhs
phTy : Type
n1 : phTy
len : Nat
ns : Vect len phTy
n : phTy
-----------------------------------------
lemma_rhs : Data.Vect.reverse, go phTy
(S (S len))
(n :: n1 :: ns)
[n1, n]
ns ++
n :: n1 :: ns =
Data.Vect.reverse, go phTy (S len) (n1 :: ns) [n1] ns ++
n :: n :: n1 :: ns
I understand that phTy stands for "phantom type", the implicit type of the vectors I'm considering. I also understand that go is the name of the function defined in the where clause for the definition of the library function reverse.
Question
How can I continue the proof? Is my inductive strategy sound? Is there a better one?
Context
This has come up in one of my toy projects, where I try to define arbitrary tensors; specifically, this seems to be needed in order to define "full index contraction". I'll elaborate a little bit on that:
I define tensors in a way that's roughly equivalent to
data Tensor : (rank : Nat) -> (shape : Vect rank Nat) -> Type -> Type where
Scalar : a -> Tensor Z [] a
Vector : Vect n (Tensor rank shape a) -> Tensor (S rank) (n :: shape) a
glossing over the rest of the source code (since it isn't relevant, and it's quite long and uninteresting as of now), I was able to define the following functions
contractIndex : Num a =>
Tensor (r1 + (2 + r2)) (s1 ++ (n :: n :: s2)) a ->
Tensor (r1 + r2) (s1 ++ s2) a
tensorProduct : Num a =>
Tensor r1 s1 a ->
Tensor r2 s2 a ->
Tensor (r1 + r2) (s1 ++ s2) a
contractProduct : Num a =>
Tensor (S r1) s1 a ->
Tensor (S r2) ((last s1) :: s2) a ->
Tensor (r1 + r2) ((take r1 s1) ++ s2) a
and I'm working on this other one
fullIndexContraction : Num a =>
Tensor r (reverse ns) a ->
Tensor r ns a ->
Tensor 0 [] a
fullIndexContraction {r = Z} {ns = []} t s = t * s
fullIndexContraction {r = S r} {ns = n :: ns} t s = ?rhs
that should "iterate contractProduct as much as possible (that is, r times)"; equivalently, it could be possible to define it as tensorProduct composed with as many contractIndex as possible (again, that amount should be r).
I'm including all this because maybe it's easier to just solve this problem without proving the lemma above: if that were the case, I'd be fully satisfied as well. I just thought the "shorter" version above might be easier to deal with, since I'm pretty sure I'll be able to figure out the missing pieces myself.
The version of Idris I'm using is 1.3.2-git:PRE (that's what the REPL says when invoked from the command line).
Edit: xash's answer covers almost everything, and I was able to write the following functions
nreverse_id : (k : Nat) -> nreverse k = k
contractAllIndices : Num a =>
Tensor (nreverse k + k) (reverse ns ++ ns) a ->
Tensor Z [] a
contractAllProduct : Num a =>
Tensor (nreverse k) (reverse ns) a ->
Tensor k ns a ->
Tensor Z [] a
I also wrote a "fancy" version of reverse, let's call it fancy_reverse, that automatically rewrites nreverse k = k in its result. So I tried to write a function that doesn't have nreverse in its signature, something like
fancy_reverse : Vect n a -> Vect n a
fancy_reverse {n} xs =
rewrite sym $ nreverse_id n in
reverse xs
contract : Num a =>
{auto eql : fancy_reverse ns1 = ns2} ->
Tensor k ns1 a ->
Tensor k ns2 a ->
Tensor Z [] a
contract {eql} {k} {ns1} {ns2} t s =
flip contractAllProduct s $
rewrite sym $ nreverse_id k in
?rhs
Now, the inferred type for rhs is Tensor (nreverse k) (reverse ns2), and I have in scope a rewrite rule for k = nreverse k, but I can't seem to wrap my head around how to rewrite the implicit eql proof to make this typecheck: am I doing something wrong?
The prelude Data.Vect.reverse is hard to reason about, because AFAIK the go helper function won't be resolved by the typechecker. The usual approach is to define your own, simpler reverse that doesn't need rewrite at the type level, like here for example:
%hide Data.Vect.reverse
nreverse : Nat -> Nat
nreverse Z = Z
nreverse (S n) = nreverse n + 1
reverse : Vect n a -> Vect (nreverse n) a
reverse [] = []
reverse (x :: xs) = reverse xs ++ [x]
lemma : {xs : Vect n a} -> reverse (x :: xs) = reverse xs ++ [x]
lemma = Refl
As you can see, this definition is straightforward enough that the equivalent lemma can be solved without further work. Thus you can probably just match on reverse ns in fullIndexContraction, like in this example:
data Foo : Vect n Nat -> Type where
MkFoo : (x : Vect n Nat) -> Foo x
foo : Foo a -> Foo (reverse a) -> Nat
foo (MkFoo []) (MkFoo []) = Z
foo (MkFoo $ x::xs) (MkFoo $ reverse xs ++ [x]) =
x + foo (MkFoo xs) (MkFoo $ reverse xs)
To your comment: first, len = nreverse len must sometimes be used, but if you had rewrite at the type level (through the usual n + 1 = 1 + n shenanigans) you would have the same problem, if not with even more complicated proofs (but this is just a guess).
vectAppendAssociative is actually enough:
lemma2 : Main.reverse (n :: ns1) ++ ns2 = Main.reverse ns1 ++ (n :: ns2)
lemma2 {n} {ns1} {ns2} = sym $ vectAppendAssociative (reverse ns1) [n] ns2

Nested recursion and `Program Fixpoint` or `Function`

I’d like to define the following function using Program Fixpoint or Function in Coq:
Require Import Coq.Lists.List.
Import ListNotations.
Require Import Coq.Program.Wf.
Require Import Recdef.
Inductive Tree := Node : nat -> list Tree -> Tree.
Fixpoint height (t : Tree) : nat :=
match t with
| Node x ts => S (fold_right Nat.max 0 (map height ts))
end.
Program Fixpoint mapTree (f : nat -> nat) (t : Tree) {measure (height t)} : Tree :=
match t with
Node x ts => Node (f x) (map (fun t => mapTree f t) ts)
end.
Next Obligation.
Unfortunately, at this point I have a proof obligation height t < height (Node x ts) without knowing that t is a member of ts.
Similarly with Function instead of Program Fixpoint, only that Function detects the problem and aborts the definition:
Error:
the term fun t : Tree => mapTree f t can not contain a recursive call to mapTree
I would expect to get a proof obligation of In t ts → height t < height (Node x ts).
Is there a way of getting that obligation that does not involve restructuring the function definition? (I know workarounds that require inlining the definition of map here, for example; I'd like to avoid these.)
Isabelle
To justify that expectation, let me show what happens when I do the same in Isabelle, using the function command, which is (AFAIK) related to Coq’s Function command:
theory Tree imports Main begin
datatype Tree = Node nat "Tree list"
fun height where
"height (Node _ ts) = Suc (foldr max (map height ts) 0)"
function mapTree where
"mapTree f (Node x ts) = Node (f x) (map (λ t. mapTree f t) ts)"
by pat_completeness auto
termination
proof (relation "measure (λ(f,t). height t)")
show "wf (measure (λ(f, t). height t))" by auto
next
fix f :: "nat ⇒ nat" and x :: nat and ts :: "Tree list" and t
assume "t ∈ set ts"
thus "((f, t), (f, Node x ts)) ∈ measure (λ(f, t). height t)"
by (induction ts) auto
qed
In the termination proof, I get the assumption t ∈ set ts.
Note that Isabelle does not require a manual termination proof here, and the following definition works just fine:
fun mapTree where
"mapTree f (Node x ts) = Node (f x) (map (λ t. mapTree f t) ts)"
This works because the map function has a “congruence lemma” of the form
xs = ys ⟹ (⋀x. x ∈ set ys ⟹ f x = g x) ⟹ map f xs = map g ys
that the function command uses to find out that the termination proof only needs to consider t ∈ set ts.
If such a lemma is not available, e.g. because I define
definition "map' = map"
and use that in mapTree, I get the same unprovable proof obligation as in Coq. I can make it work again by declaring a congruence lemma for map', e.g. using
declare map_cong[folded map'_def,fundef_cong]
In this case, you actually do not need well-founded recursion in its full generality:
Require Import Coq.Lists.List.
Set Implicit Arguments.
Inductive tree := Node : nat -> list tree -> tree.
Fixpoint map_tree (f : nat -> nat) (t : tree) : tree :=
match t with
| Node x ts => Node (f x) (map (fun t => map_tree f t) ts)
end.
Coq is able to figure out by itself that recursive calls to map_tree are performed on strict subterms. However, proving anything about this function is difficult, as the induction principle generated for tree is not useful:
tree_ind :
forall P : tree -> Prop,
(forall (n : nat) (l : list tree), P (Node n l)) ->
forall t : tree, P t
This is essentially the same problem you described earlier. Luckily, we can fix the issue by proving our own induction principle with a proof term.
Require Import Coq.Lists.List.
Import ListNotations.
Unset Elimination Schemes.
Inductive tree := Node : nat -> list tree -> tree.
Set Elimination Schemes.
Fixpoint tree_ind
(P : tree -> Prop)
(IH : forall (n : nat) (ts : list tree),
fold_right (fun t => and (P t)) True ts ->
P (Node n ts))
(t : tree) : P t :=
match t with
| Node n ts =>
let fix loop ts :=
match ts return fold_right (fun t' => and (P t')) True ts with
| [] => I
| t' :: ts' => conj (tree_ind P IH t') (loop ts')
end in
IH n ts (loop ts)
end.
Fixpoint map_tree (f : nat -> nat) (t : tree) : tree :=
match t with
| Node x ts => Node (f x) (map (fun t => map_tree f t) ts)
end.
The Unset Elimination Schemes command prevents Coq from generating its default (and not useful) induction principle for tree. The occurrence of fold_right on the induction hypothesis simply expresses that the predicate P holds of every tree t' appearing in ts.
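As an aside (my observation, not part of the original answer), the fold_right hypothesis is just an unfolded form of the standard library's Forall predicate, which is handy to know when combining this induction principle with existing List lemmas:

(* fold_right (fun t => and (P t)) True ts says the same thing as Forall P ts. *)
Lemma fold_right_and_Forall (P : tree -> Prop) (ts : list tree) :
  fold_right (fun t => and (P t)) True ts <-> Forall P ts.
Proof.
  induction ts as [|t ts IH]; simpl.
  - split; intros _; [constructor | exact I].
  - split.
    + intros [Pt Hts]. constructor; [exact Pt | apply (proj1 IH); exact Hts].
    + intros H. inversion H; subst.
      split; [assumption | apply (proj2 IH); assumption].
Qed.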
Here is a statement that you can prove using this induction principle:
Lemma map_tree_comp f g t :
map_tree f (map_tree g t) = map_tree (fun n => f (g n)) t.
Proof.
induction t as [n ts IH]; simpl; f_equal.
induction ts as [|t' ts' IHts]; try easy.
simpl in *.
destruct IH as [IHt' IHts'].
specialize (IHts IHts').
now rewrite IHt', <- IHts.
Qed.
You can now do this with Equations and get the right elimination principle automatically, using either structural nested recursion or well-founded recursion.
In general, it might be advisable to avoid this problem. But if one really wants to obtain the proof obligation that Isabelle gives you, here is a way:
In Isabelle, we can give an external lemma that states that map applies its function argument only to members of the given list. In Coq, we cannot do this in an external lemma, but we can do it in the type. So instead of the normal type of map
forall A B, (A -> B) -> list A -> list B
we want the type to say “f is only ever applied to elements of the list”:
forall A B (xs : list A), (forall x : A, In x xs -> B) -> list B
(This requires reordering the arguments so that the type of f can mention xs.)
Writing this function is not trivial, and I found it easier to use a proof script:
Definition map {A B} (xs : list A) (f : forall (x:A), In x xs -> B) : list B.
Proof.
induction xs.
* exact [].
* refine (f a _ :: IHxs _).
- left. reflexivity.
- intros. eapply f. right. eassumption.
Defined.
But you can also write it “by hand”:
Fixpoint map {A B} (xs : list A) : forall (f : forall (x:A), In x xs -> B), list B :=
match xs with
| [] => fun _ => []
| x :: xs => fun f => f x (or_introl eq_refl) :: map xs (fun y h => f y (or_intror h))
end.
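Either version computes as expected; here is a quick check of my own (assuming the ListNotations import from the question's preamble):

Example map_example :
  map [1; 2; 3] (fun x _ => x + 1) = [2; 3; 4].
Proof. reflexivity. Qed.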
In either case, the result is nice: I can use this function in mapTree, i.e.
Program Fixpoint mapTree (f : nat -> nat) (t : Tree) {measure (height t)} : Tree :=
match t with
Node x ts => Node (f x) (map ts (fun t _ => mapTree f t))
end.
Next Obligation.
and I don’t have to do anything with the new argument to f, but it shows up in the termination proof obligation as In t ts → height t < height (Node x ts), as desired. So I can prove that and define mapTree:
simpl.
apply Lt.le_lt_n_Sm.
induction ts; inversion_clear H.
- subst. apply PeanoNat.Nat.le_max_l.
- rewrite IHts by assumption.
apply PeanoNat.Nat.le_max_r.
Qed.
It only works with Program Fixpoint, not with Function, unfortunately.

Well founded recursion in Coq

I am trying to write a function for computing natural division in Coq and I am having some trouble defining it since it is not structural recursion.
My code is:
Inductive N : Set :=
| O : N
| S : N -> N.
Inductive Bool : Set :=
| True : Bool
| False : Bool.
Fixpoint sum (m :N) (n : N) : N :=
match m with
| O => n
| S x => S ( sum x n)
end.
Notation "m + n" := (sum m n) (at level 50, left associativity).
Fixpoint mult (m :N) (n : N) : N :=
match m with
| O => O
| S x => n + (mult x n)
end.
Notation "m * n" := (mult m n) (at level 40, left associativity).
Fixpoint pred (m : N) : N :=
match m with
| O => S O
| S x => x
end.
Fixpoint resta (m:N) (n:N) : N :=
match n with
| O => m
| S x => pred (resta m x)
end.
Notation "m - x" := (resta m x) (at level 50, left associativity).
Fixpoint leq_nat (m : N) (n : N) : Bool :=
match m with
| O => True
| S x => match n with
| O => False
| S y => leq_nat x y
end
end.
Notation "m <= n" := (leq_nat m n) (at level 70).
Fixpoint div (m : N) (n : N) : N :=
match n with
| O => O
| S x => match m <= n with
| False => O
| True => pred (div (m-n) n)
end
end.
As you can see, Coq does not like my function div; it says
Error: Cannot guess decreasing argument of fix.
How can I supply a termination proof for this function in Coq? I can prove that n > O ∧ n <= m -> (m - n) < m.
The simplest strategy in this case is probably to use the Program extension together with a measure. You will then have to provide a proof that the arguments used in the recursive call are smaller than the top level ones according to the measure.
Require Coq.Program.Tactics.
Require Coq.Program.Wf.
Fixpoint toNat (m : N) : nat :=
match m with O => 0 | S n => 1 + (toNat n) end.
Program Fixpoint div (m : N) (n : N) {measure (toNat m)}: N :=
match n with
| O => O
| S x => match m <= n with
| False => O
| True => pred (div (m-n) n)
end
end.
Next Obligation.
(* your proof here *)
Although gallais's answer is definitely the way to go in general, I should point out that we can define division on the natural numbers in Coq as a simple fixpoint. Here, I'm using the definition of nat in the standard library for simplicity.
Fixpoint minus (n m : nat) {struct n} : nat :=
match n, m with
| S n', S m' => minus n' m'
| _, _ => n
end.
Definition leq (n m : nat) : bool :=
match minus n m with
| O => true
| _ => false
end.
Fixpoint div (n m : nat) {struct n} : nat :=
match m with
| O => O
| S m' =>
if leq (S m') n then
match n with
| O => O (* Impossible *)
| S n' => S (div (minus n' m') (S m'))
end
else O
end.
Compute div 6 3.
Compute div 7 3.
Compute div 9 3.
The definition of minus is essentially the one from the standard library. Notice on the second branch of that definition we return n. Thanks to this trick, Coq's termination checker can detect that minus n' m' is structurally smaller than S n', which allows us to perform the recursive call to div.
There's actually an even simpler way of doing this, although a bit harder to understand: you can check whether the divisor is smaller and perform the recursive call in a single step.
(* Divide n by m + 1 *)
Fixpoint div'_aux n m {struct n} :=
match minus n m with
| O => O
| S n' => S (div'_aux n' m)
end.
Definition div' n m :=
match m with
| O => O (* Arbitrary *)
| S m' => div'_aux n m'
end.
Compute div' 6 3.
Compute div' 7 3.
Compute div' 9 3.
Once again, because of the form of the minus function, Coq's termination checker knows that n' in the second branch of div'_aux is a valid argument to a recursive call. Notice also that div'_aux is dividing by m + 1.
Of course, this whole thing relies on a clever trick that requires understanding the termination checker in detail. In general, you have to resort to well-founded recursion, as gallais showed.
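For reference, here is a rough sketch of my own (untested, and written against the standard library's nat rather than the custom N above) of what that fully general approach can look like, using Fix and lt_wf:

Require Import Coq.Arith.PeanoNat.
Require Import Coq.Arith.Compare_dec.
Require Import Coq.Arith.Wf_nat.

(* Divide n by m via well-founded recursion on n with respect to lt.
   le_lt_dec returns the comparison together with a proof, and
   Nat.sub_lt turns that proof into evidence that n - S m' < n,
   i.e. that the recursive argument decreases. *)
Definition div_wf (m : nat) : nat -> nat :=
  Fix lt_wf (fun _ => nat)
    (fun n rec =>
      match m with
      | 0 => 0
      | S m' =>
        match le_lt_dec (S m') n with
        | left H => S (rec (n - S m')
                           (Nat.sub_lt n (S m') H (Nat.lt_0_succ m')))
        | right _ => 0
        end
      end).

(* Intended meaning: div_wf 3 7 = 2, i.e. 7 divided by 3. *)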

Structural recursion on a dependent parameter

I'm trying to write the sieve of Eratosthenes in Coq. I have a function crossout : forall {n:nat}, vector bool n -> nat -> vector bool n. When the sieve finds a number that is prime, it uses crossout to mark all the numbers that are not prime and then recurses on the resulting vector. The sieve obviously can't be structurally recursive on the vector itself, but it is structurally recursive on the length of the vector. What I want is to do something like this:
Fixpoint sieve {n:nat} (v:vector bool n) (acc:nat) {struct n} : list nat :=
match v with
| [] => Datatypes.nil
| false :: v' => sieve v' (S acc)
| true :: v' => Datatypes.cons acc (sieve (crossout v' acc) (S acc))
end.
But if I write it like this, Coq complains that the length of v' is not a subterm of n. I know that it is, but no matter how I structure the function, I can't seem to convince Coq that it is. Does anyone know how I can?
This is one of the most common pitfalls with dependent types in Coq. What is happening intuitively is that as soon as you pattern match on v, Coq "forgets" that the length of that vector is actually n, and loses the connection between the length of v' and the predecessor of n. The solution here is to apply what Adam Chlipala calls the convoy pattern, and make the pattern match return a function. While it is possible to do it by pattern matching on v, I think it is easier to do it by pattern matching on n:
Require Import Vector.
Axiom crossout : forall {n}, t bool n -> nat -> t bool n.
Fixpoint sieve {n:nat} : t bool n -> nat -> list nat :=
match n with
| 0 => fun _ _ => Datatypes.nil
| S n' => fun v acc =>
if hd v then
Datatypes.cons acc (sieve (crossout (tl v) acc) (S acc))
else
sieve (tl v) (S acc)
end.
Notice how the header of sieve has changed a little bit: now the return type is actually a function to help Coq's type inference.
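As a quick sanity check of my own (crossout is still an axiom here, so only the branches that avoid it actually reduce):

Example sieve_nil : sieve (Vector.nil bool) 0 = Datatypes.nil.
Proof. reflexivity. Qed.

Example sieve_false :
  sieve (Vector.cons bool false 0 (Vector.nil bool)) 7 = Datatypes.nil.
Proof. reflexivity. Qed.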
For more information, check out Adam's book: http://adam.chlipala.net/cpdt/html/MoreDep.html.

Summation in functional programming

I was searching the web for the inclusion-exclusion principle, and what I found is this:
(from MathWorld - A Wolfram Web Resource: wolfram.com)
http://mathworld.wolfram.com/Inclusion-ExclusionPrinciple.html
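For reference (paraphrasing the linked page), the principle for finite sets A_1, ..., A_n is:

|A_1 ∪ ... ∪ A_n| = Σ_i |A_i| - Σ_{i<j} |A_i ∩ A_j| + Σ_{i<j<k} |A_i ∩ A_j ∩ A_k| - ... + (-1)^(n+1) |A_1 ∩ ... ∩ A_n|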
It doesn't matter if you don't understand the formula; in fact, what I need is to implement this:
For example, the input is:
(summation (list 1 2) 3)
where (list 1 2) gives i and j, and 3 is the limit of the sum, n
(n should appear above the sigma, but...).
Then the output of the formula, in Scheme, will be:
(list (list 1 2) (list 1 3) (list 2 3))
How can I implement this in Scheme or in Haskell? (Sorry for my English.)
In Haskell, use a list comprehension:
Prelude> [(i,j) | i <- [1..4], j <- [i+1..4]]
[(1,2),(1,3),(1,4),(2,3),(2,4),(3,4)]
Prelude> [i * j | i <- [1..4], j <- [i+1..4]]
[2,3,4,6,8,12]
Prelude> sum [i * j | i <- [1..4], j <- [i+1..4]]
35
The first line gives a list of all pairs (i,j) with 1 <= i < j <= 4.
The second line gives a list of the products i*j with 1 <= i < j <= 4.
The third line gives the sum of these values: Σ_{1 <= i < j <= 4} i*j.
In Racket, you'd probably use a for comprehension:
#lang racket
(for*/sum ([i (in-range 1 5)]
[j (in-range (add1 i) 5)])
(* i j))
The core functionality you need for a simple implementation of the inclusion-exclusion principle is to generate all k-element subsets of the index set. Using lists, that is an easy recursion:
pick :: Int -> [a] -> [[a]]
pick 0 _ = [[]] -- There is exactly one 0-element subset of any set
pick _ [] = [] -- No way to pick any nonzero number of elements from an empty set
pick k (x:xs) = map (x:) (pick (k-1) xs) ++ pick k xs
-- There are two groups of k-element subsets of a set containing x,
-- those that contain x and those that do not
If pick is not a local function whose calls are 100% under your control, you should add a check that the Int parameter is never negative (you could use Word for that parameter, then that's built into the type).
If k is largish, checking against the length of the list to pick from prevents a lot of fruitless recursion, so it's better to build that in from the start:
pick :: Int -> [a] -> [[a]]
pick k xs = choose k (length xs) xs
choose :: Int -> Int -> [a] -> [[a]]
choose 0 _ _ = [[]]
choose k l xs
| l < k = [] -- we want to choose more than we have
| l == k = [xs] -- we want exactly as many as we have
| otherwise = case xs of
[] -> error "This ought to be impossible, l == length xs should hold"
(y:ys) -> map (y:) (choose (k-1) (l-1) ys) ++ choose k (l-1) ys
The inclusion-exclusion formula then becomes
inclusionExclusion indices
= sum . zipWith (*) (cycle [1,-1]) $
[sum (map count $ pick k indices) | k <- [1 .. length indices]]
where count list counts the number of elements of the intersection of [subset i | i <- list]. Of course, you need an efficient way to calculate that, or it would be more efficient to find the size of the union directly.
There's much room for optimisation, and there are different ways to do it, but that's a fairly short and direct translation of the principle.
Here is a possible way to do it in Scheme. I've made the following function to create quantifications:
#lang racket
(define (quantification next test op e)
{lambda (A B f-terme)
(let loop ([i A] [resultat e])
(if [test i B]
resultat
(loop (next i) (op (f-terme i) resultat)) ))})
With this function you can create sum, product, generalized union and generalized intersection.
;; Arithmetic example
(define sumQ (quantification add1 > + 0))
(define productQ (quantification add1 > * 1))
;; Sets example with (require racket/set)
(define (unionQ set-of-sets)
(let [(empty-set (set))
(list-of-sets (set->list set-of-sets))
]
((quantification cdr eq? set-union empty-set) list-of-sets
'()
car)))
(define (intersectionQ set-of-sets)
(let [(empty-set (set))
(list-of-sets (set->list set-of-sets))
]
((quantification cdr eq? set-intersect (car list-of-sets)) (cdr list-of-sets)
'()
car)))
This way you can do
(define setA2 (set 'a 'b))
(define setA5 (set 'a 'b 'c 'd 'e))
(define setC3 (set 'c 'd 'e))
(define setE3 (set 'e 'f 'g))
(unionQ (set setA2 setC3 setE3))
(intersectionQ (set setA5 setC3 setE3))
I worked on something similar in Haskell:
module Quantification where
quantifier next test op =
let loop e a b f = if (test a b)
then e
else loop (op (f a) e) (next a) b f
in loop
quantifier_on_integer_set = quantifier (+1) (>)
sumq = quantifier_on_integer_set (+) 0
prodq = quantifier_on_integer_set (*) 1
But I never went further... You can probably start from this, however.
