Understanding Assignment Solution in Agda

Consider the following extracted piece of code for proving the "Unicity of Typing" for variable in Agda:
unicity : ∀ {Γ₁ Γ₂ e τ₁ τ₂} → (Γ₁ ⊢ e ∷ τ₁) → (Γ₂ ⊢ e ∷ τ₂) → (Γ₁ ≈ Γ₂) → (τ₁ ∼ τ₂)
unicity (VarT here) (VarT here) (_ , ( τ∼ , _ )) = τ∼
unicity (VarT here) (VarT (skip {α = α} lk2)) (s≡s' , ( _ , _ )) = ⊥-elim (toWitnessFalse α (toWitness s≡s'))
unicity (VarT (skip {α = α} lk1)) (VarT here) (s'≡s , ( _ , _ )) = ⊥-elim (toWitnessFalse α (toWitness s'≡s))
unicity (VarT (skip lk1)) (VarT (skip lk2)) (_ , ( _ , Γ≈ )) = unicity (VarT lk1) (VarT lk2) Γ≈
I need an explanation of how ⊥-elim, toWitnessFalse and toWitness work. Also, what do the expressions ⊤ and ⊥ mean/stand for?

⊥ is the empty type, so (in a total, consistent language) you can never construct a value of type ⊥. But this also means that any proposition you can think of follows from ⊥. This is what ⊥-elim witnesses:
⊥-elim : ∀ {w} {Whatever : Set w} → ⊥ → Whatever
This is very useful in practice because you might be writing a proof under some assumptions, and those assumptions may turn out to be contradictory: one of them might be ⊥ itself, or one might be a negative statement (A → ⊥ for some A) while A is also provable. What you find out is effectively that you don't have to care about that particular branch anymore, since it is impossible; but even though you don't care, you still have to formally satisfy the result type somehow. This is what ⊥-elim gives you.
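For instance, here is a minimal sketch (using Data.Empty from the standard library) of how ⊥-elim discharges an impossible branch: from a proof of A and a refutation of A we can produce a value of any type, because the situation cannot actually arise.
open import Data.Empty using (⊥; ⊥-elim)

-- The contradictory assumptions yield ⊥, and ⊥-elim turns that into
-- whatever result type the branch demands.
impossible : {A B : Set} → A → (A → ⊥) → B
impossible a ¬a = ⊥-elim (¬a a)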
toWitness's type and related definitions are as follows:
T : Bool → Set
T true = ⊤
T false = ⊥
⌊_⌋ : ∀ {p} {P : Set p} → Dec P → Bool
⌊ yes _ ⌋ = true
⌊ no _ ⌋ = false
True : ∀ {p} {P : Set p} → Dec P → Set
True Q = T ⌊ Q ⌋
toWitness : ∀ {p} {P : Set p} {Q : Dec P} → True Q → P
Given a Q : Dec P, True Q is either ⊤ (if Q = yes _) or ⊥ (if Q = no _). The only way to call toWitness, then, is to have Q say that P is true and to pass the trivial unit constructor tt : ⊤; the only other possibility would be to have Q say that P is false and to pass a value of type ⊥ as the argument, but as we've seen, that's not possible. In summary, toWitness says that if Q decides that P holds, then we can extract a proof of P from Q.
toWitnessFalse is exactly the same with the roles reversed: if Q decides that P doesn't hold, then we can extract a proof of ¬ P from Q.
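As a small usage sketch (assuming the standard library's decidable order _≤?_ on naturals): 2 ≤? 5 computes to yes _, so True (2 ≤? 5) reduces to ⊤, the underscore below gets filled with tt, and toWitness recovers the underlying proof.
open import Data.Nat using (ℕ; _≤_; _≤?_)
open import Relation.Nullary.Decidable using (toWitness)

-- The decision procedure runs at type-checking time; toWitness then
-- extracts the actual proof object of 2 ≤ 5.
2≤5 : 2 ≤ 5
2≤5 = toWitness {Q = 2 ≤? 5} _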

Related

Prove recursive function exists using only `nat_ind`

I'm trying to prove the following in Coq:
∀ B: Type, ∀ a: B, ∀ b: nat -> B -> B, ∃ f: nat -> B, f 0 = a ∧ ∀ n: nat, f (S n) = b n (f n).
This implies that a fairly general class of recursive functions exists. I know that I can construct that function using Fixpoint items or fix expressions, but I want to avoid them, and instead use nat_ind, defined with this type:
∀ P: nat → Prop, P 0 → (∀ n: nat, P n → P (S n)) → ∀ n: nat, P n
I believe this is possible since nat_ind behaves like a recursion combinator. But I haven't figured out how to prove it. The problem is that the induction variable is under the ∃ f quantifier, so I don't have access to it. I'm able to prove something like this:
∀ B: Type, ∀ a: B, ∀ b: nat -> B -> B, ∀ m: nat,
∃ f: nat -> B, f 0 = a ∧ ∀ n: nat, n < m -> f (S n) = b n (f n)
But I don't think it helps in proving the original one.
Is it possible to prove the original one without using fix directly? I'm OK with using double negation and other well-known axioms if needed. Using nat_rec and nat_rect is also fine, but only as an opaque axiom. Precisely, using these is fine:
Axiom nat_rec2: ∀ P : nat → Set, P 0 → (∀ n : nat, P n → P (S n)) → ∀ n : nat, P n.
Axiom nat_rect2: ∀ P : nat → Type, P 0 → (∀ n : nat, P n → P (S n)) → ∀ n : nat, P n.
The problem seems to be to obtain recursion from the following axiomatization of nat:
Parameter nat : Type.
Parameter O : nat.
Parameter S : nat -> nat.
Parameter disjoint_O_S : forall n, O <> S n.
Parameter injective_S : forall n n', S n = S n' -> n = n'.
Parameter nat_rect : forall P: nat -> Type, P O -> (forall n: nat, P n -> P (S n)) -> forall n : nat, P n.
Where the main issue is that the nat_rect axiom has no computational behavior, so although we might define a recursor B -> (nat -> B -> B) -> nat -> B as nat_rect (fun _ => B), we can't prove anything about it.
The solution is to first encode the graph of the desired recursive function f as a relation, and then use nat_rect to produce a dependent pair: a value (which will be f n) together with evidence that this value is in the graph of f.
Section Rec.
  Context (B : Type) (a : B) (b : nat -> B -> B).

  Inductive graph : nat -> B -> Prop :=
  | recO : graph O a
  | recS n y : graph n y -> graph (S n) (b n y)
  .

  Lemma graph_fun : forall n, { y | forall y', y = y' <-> graph n y' }.
  Proof.
    induction n as [ | n IH ] using nat_rect.
    - exists a; split.
      + intros <-. constructor.
      + inversion 1; [ reflexivity | ]. contradiction (disjoint_O_S n); auto.
    - destruct IH as [y IH]. exists (b n y); split.
      + intros <-. constructor. apply IH. auto.
      + inversion 1; subst. contradiction (disjoint_O_S n); auto.
        apply injective_S in H0. subst.
        apply IH in H1. subst; auto.
  Qed.

  Theorem nat_rec : exists (f : nat -> B), f O = a /\ forall n, f (S n) = b n (f n).
  Proof.
    exists (fun n => proj1_sig (graph_fun n)). split.
    - apply (proj2_sig (graph_fun O)). constructor.
    - intros n. apply (proj2_sig (graph_fun (S n))).
      constructor. apply (proj2_sig (graph_fun n)).
      reflexivity.
  Qed.
End Rec.
If you have the Prop inductor nat_ind instead of nat_rect, that same technique can be adapted by also assuming the axiom constructive_indefinite_description (which actually lets you reconstruct nat_rect, but here you can more simply apply it at the beginning of graph_fun):
From Coq Require Import IndefiniteDescription.
About constructive_indefinite_description.
(*
constructive_indefinite_description :
forall (A : Type) (P : A->Prop),
(exists x, P x) -> { x : A | P x }.
*)

Defining a function to a subset of the codomain

I am trying to define the image restriction of a function
f : A → B as f' : A → f[A], where f'(a) = f(a). However, I am not sure how to define it in Lean.
In my opinion, the most intuitive way to define it is:
def fun_to_image {A B: Type*} (f: A → B): A → image f set.univ :=
λ a, f a
However, this gets rejected because (f a) is of type B not (image f set.univ).
I even tried proving that f a ∈ image f univ. It didn't help:
def fun_to_image (f : A → B) : A → image f univ :=
λ a,
  have h : f a ∈ image f univ :=
    exists.intro a (and.intro trivial (eq.refl (f a))),
  f a
The error message is:
type mismatch, term
λ (a : A), f a
has type
A → B
but is expected to have type
A → ↥(f '' univ)
set.univ and image are defined as follows in data.set
def univ : set α :=
λ a, true
def image (f : α → β) (s : set α) : set β :=
{b | ∃ a, a ∈ s ∧ f a = b}
Any idea how this can be done?
You are almost there (-;
There is a little “warning sign” in the error message.
but is expected to have type
A → ↥(f '' univ)
You can see the creepy up-arrow ↥. Let me explain what it means:
As you have recalled, image f set.univ is defined as a subset. Since you are treating it as a type, it is automatically coerced into a so-called subtype: if s : set X, then the corresponding subtype s has terms of the form ⟨x, h⟩ (type these as \< and \> in VS Code), where x : X and h : x ∈ s.
This “coercion to type” is indicated by the ↥.
So, to finish your definition, you will have to write ⟨f a, h⟩, instead of f a.
Note that the main library also has a definition of range, which is meant to be used in place of image _ set.univ.
It already comes with range_factorization:
def range_factorization (f : ι → β) : ι → range f :=
λ i, ⟨f i, mem_range_self i⟩
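For readers following along in Agda rather than Lean, the same idea can be sketched with a dependent pair (an illustration only, not part of the mathlib answer): an element of the image-as-type is a value packaged with its membership proof, which is exactly what range_factorization builds.
open import Data.Product using (Σ; _,_)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- The image of f as a type: a b : B together with evidence that some
-- a : A maps to it.
Image : {A B : Set} → (A → B) → Set
Image {A} {B} f = Σ B (λ b → Σ A (λ a → f a ≡ b))

fun-to-image : {A B : Set} (f : A → B) → A → Image f
fun-to-image f a = f a , a , refl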

Preserving functor positivity when going via product vs. vector

In the following code, the definition of μ₁ is accepted by Agda as a strictly positive functor, which makes sense. If I tie the knot via a product, as in μ₂, it is still accepted. However, if I try to go via a vector, as in μ₃, it is not accepted anymore.
data F : Set where
  X : F

⟦_⟧₁ : F → Set → Set
⟦ X ⟧₁ A = A

data μ₁ (f : F) : Set where
  Fix₁ : ⟦ f ⟧₁ (μ₁ f) → μ₁ f

open import Data.Product

⟦_⟧₂ : F → (Set × Set) → Set
⟦ X₁ ⟧₂ (A , _) = A

open import Data.Unit

data μ₂ (f : F) : Set where
  Fix₂ : ⟦ f ⟧₂ (μ₂ f , ⊤) → μ₂ f

open import Data.Nat
open import Data.Vec

⟦_⟧₃ : ∀ {n} → F → Vec Set (suc n) → Set
⟦ X ⟧₃ (A ∷ _) = A

data μ₃ (f : F) : Set where
  Fix₃ : ⟦ f ⟧₃ [ μ₃ f ] → μ₃ f
The error message for μ₃ is
μ₃ is not strictly positive, because it occurs
in the third argument to ⟦_⟧₃
in the type of the constructor Fix₃
in the definition of μ₃.
What is the fundamental difference between μ₂ and μ₃? Is there a way to get something like μ₃ working?
I'm mostly guessing. _×_ is a record and Vec is a data. Agda rejects μ₂ when _×_ is defined as a data:
data Pair (A B : Set₁) : Set₁ where
  pair : A -> B -> Pair A B

⟦_⟧₃ : F → Pair Set Set → Set
⟦ X ⟧₃ (pair A _) = A

data μ₃ (f : F) : Set where
  Fix₃ : ⟦ f ⟧₃ (pair (μ₃ f) ⊤) → μ₃ f
Results in "μ₃ is not strictly positive, because it occurs...". But if you define ⟦_⟧₃ as
⟦_⟧₃ : F → Pair Set Set → Set
⟦ X ⟧₃ _ = ⊤
or
⟦_⟧₃ : F → Pair Set Set → Set
⟦ _ ⟧₃ (pair A _) = A
then everything is OK (your μ₂ is a bit misleading, since it does not pattern match on F either: X₁ there is a variable, not the constructor X). In the second case Agda just normalizes the expression: since there is no pattern matching on the first argument and the second is in WHNF, ⟦_⟧₃ is totally eliminated. But I don't know how Agda resolves the first case. Something ad hoc, I suppose.
Your μ₂ typechecks, because Agda eliminates pattern matching on records:
map : {A B : Set} {P : A → Set} {Q : B → Set}
      (f : A → B) → (∀ {x} → P x → Q (f x)) →
      Σ A P → Σ B Q
map f g (x , y) = (f x , g y)
The clause above is internally translated into the following one:
map f g p = (f (Σ.proj₁ p) , g (Σ.proj₂ p))
So it's just like the
⟦_⟧₃ : F → Pair Set Set → Set
⟦ X ⟧₃ _ = ⊤
case.
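The eta law for records that drives this translation can be checked directly; a small sketch using the standard library:
open import Data.Product using (_×_; _,_; proj₁; proj₂)
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

-- Because _×_ is a record with eta, a pair is definitionally equal to
-- the pairing of its projections, so refl typechecks without matching.
eta : {A B : Set} (p : A × B) → p ≡ (proj₁ p , proj₂ p)
eta p = refl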
Also, ⟦_⟧₃ will typecheck if you remove pattern matching on the first argument.
UPDATE
No, it's not about pattern matching elimination, since this definition
data Pair (A B : Set₁) : Set₁ where
  pair : A -> B -> Pair A B

fst : ∀ {A B} -> Pair A B -> A
fst (pair x y) = x

⟦_⟧₃ : F → Pair Set Set → Set
⟦ X ⟧₃ p = fst p

data μ₃ (f : F) : Set where
  Fix₃ : ⟦ f ⟧₃ (pair (μ₃ f) ⊤) → μ₃ f
is rejected too.

Termination-checking of function over a trie

I'm having difficulty convincing Agda to termination-check the function fmap below and similar functions defined recursively over the structure of a Trie. A Trie is a trie whose domain is a Type, an object-level type formed from unit, products and fixed points (I've omitted coproducts to keep the code minimal). The problem seems to relate to a type-level substitution I use in the definition of Trie. (The expression const (μ τ) * τ means: apply the substitution const (μ τ) to the type τ.)
module Temp where

  open import Data.Unit
  open import Category.Functor
  open import Function
  open import Level
  open import Relation.Binary

  -- A context is just a snoc-list.
  data Cxt {𝒂} (A : Set 𝒂) : Set 𝒂 where
    ε : Cxt A
    _∷ᵣ_ : Cxt A → A → Cxt A

  -- Context membership.
  data _∈_ {𝒂} {A : Set 𝒂} (a : A) : Cxt A → Set 𝒂 where
    here : ∀ {Δ} → a ∈ Δ ∷ᵣ a
    there : ∀ {Δ a′} → a ∈ Δ → a ∈ Δ ∷ᵣ a′

  infix 3 _∈_

  -- Well-formed types, using de Bruijn indices.
  data _⊦ (Δ : Cxt ⊤) : Set where
    nat : Δ ⊦
    𝟏 : Δ ⊦
    var : _ ∈ Δ → Δ ⊦
    _+_ _⨰_ : Δ ⊦ → Δ ⊦ → Δ ⊦
    μ : Δ ∷ᵣ _ ⊦ → Δ ⊦

  infix 3 _⊦

  -- A closed type.
  Type : Set
  Type = ε ⊦

  -- Type-level substitutions and renamings.
  Sub Ren : Rel (Cxt ⊤) zero
  Sub Δ Δ′ = _ ∈ Δ → Δ′ ⊦
  Ren Δ Δ′ = ∀ {x} → x ∈ Δ → x ∈ Δ′

  -- Renaming extension.
  extendᵣ : ∀ {Δ Δ′} → Ren Δ Δ′ → Ren (Δ ∷ᵣ _) (Δ′ ∷ᵣ _)
  extendᵣ ρ here = here
  extendᵣ ρ (there x) = there (ρ x)

  -- Lift a type renaming to a type.
  _*ᵣ_ : ∀ {Δ Δ′} → Ren Δ Δ′ → Δ ⊦ → Δ′ ⊦
  _ *ᵣ nat = nat
  _ *ᵣ 𝟏 = 𝟏
  ρ *ᵣ (var x) = var (ρ x)
  ρ *ᵣ (τ₁ + τ₂) = (ρ *ᵣ τ₁) + (ρ *ᵣ τ₂)
  ρ *ᵣ (τ₁ ⨰ τ₂) = (ρ *ᵣ τ₁) ⨰ (ρ *ᵣ τ₂)
  ρ *ᵣ (μ τ) = μ (extendᵣ ρ *ᵣ τ)

  -- Substitution extension.
  extend : ∀ {Δ Δ′} → Sub Δ Δ′ → Sub (Δ ∷ᵣ _) (Δ′ ∷ᵣ _)
  extend θ here = var here
  extend θ (there x) = there *ᵣ (θ x)

  -- Lift a type substitution to a type.
  _*_ : ∀ {Δ Δ′} → Sub Δ Δ′ → Δ ⊦ → Δ′ ⊦
  θ * nat = nat
  θ * 𝟏 = 𝟏
  θ * var x = θ x
  θ * (τ₁ + τ₂) = (θ * τ₁) + (θ * τ₂)
  θ * (τ₁ ⨰ τ₂) = (θ * τ₁) ⨰ (θ * τ₂)
  θ * μ τ = μ (extend θ * τ)

  data Trie {𝒂} (A : Set 𝒂) : Type → Set 𝒂 where
    〈〉 : A → 𝟏 ▷ A
    〔_,_〕 : ∀ {τ₁ τ₂} → τ₁ ▷ A → τ₂ ▷ A → τ₁ + τ₂ ▷ A
    ↑_ : ∀ {τ₁ τ₂} → τ₁ ▷ τ₂ ▷ A → τ₁ ⨰ τ₂ ▷ A
    roll : ∀ {τ} → (const (μ τ) * τ) ▷ A → μ τ ▷ A

  infixr 5 Trie
  syntax Trie A τ = τ ▷ A

  {-# NO_TERMINATION_CHECK #-}
  fmap : ∀ {a} {A B : Set a} {τ} → (A → B) → τ ▷ A → τ ▷ B
  fmap f (〈〉 x) = 〈〉 (f x)
  fmap f 〔 σ₁ , σ₂ 〕 = 〔 fmap f σ₁ , fmap f σ₂ 〕
  fmap f (↑ σ) = ↑ (fmap (fmap f) σ)
  fmap f (roll σ) = roll (fmap f σ)
It would seem that fmap recurses into a strictly smaller argument in each case; certainly the product case is fine if I remove recursive types. On the other hand, the definition handles recursive types fine if I remove products.
What's the simplest way to proceed here? The inline/fuse trick does not look particularly applicable, but maybe it is. Or should I be looking for another way to deal with the substitution in the definition of Trie?
The inline/fuse trick can be applied in a (perhaps) surprising way. This trick is suited for problems of this sort:
data Trie (A : Set) : Set where
  nil : Trie A
  node : A → List (Trie A) → Trie A

map-trie : {A B : Set} → (A → B) → Trie A → Trie B
map-trie f nil = nil
map-trie f (node x xs) = node (f x) (map (map-trie f) xs)
This function is structurally recursive, but in a hidden way. map just applies map-trie f to the elements of xs, so map-trie gets applied to smaller (sub-)tries. But Agda doesn't look through the definition of map to see that it doesn't do anything funky. So we must apply the inline/fuse trick to get it past the termination checker:
map-trie : {A B : Set} → (A → B) → Trie A → Trie B
map-trie f nil = nil
map-trie {A} {B} f (node x xs) = node (f x) (map′ xs)
  where
  map′ : List (Trie A) → List (Trie B)
  map′ [] = []
  map′ (x ∷ xs) = map-trie f x ∷ map′ xs
Your fmap function shares the same structure, you map a lifted function of some sort. But what to inline? If we follow the example above, we should inline fmap itself. This looks and feels a bit strange, but indeed, it works:
fmap fmap′ : ∀ {a} {A B : Set a} {τ} → (A → B) → τ ▷ A → τ ▷ B
fmap f (〈〉 x) = 〈〉 (f x)
fmap f 〔 σ₁ , σ₂ 〕 = 〔 fmap f σ₁ , fmap f σ₂ 〕
fmap f (↑ σ) = ↑ (fmap (fmap′ f) σ)
fmap f (roll σ) = roll (fmap f σ)
fmap′ f (〈〉 x) = 〈〉 (f x)
fmap′ f 〔 σ₁ , σ₂ 〕 = 〔 fmap′ f σ₁ , fmap′ f σ₂ 〕
fmap′ f (↑ σ) = ↑ (fmap′ (fmap f) σ)
fmap′ f (roll σ) = roll (fmap′ f σ)
There's another technique you can apply: it's called sized types. Instead of relying on the compiler to figure out when something is or is not structurally recursive, you instead specify it directly. However, you have to index your data types by a Size type, so this approach is fairly intrusive and cannot be applied to already existing types, but I think it is worth mentioning.
In its simplest form, sized type behaves as a type indexed by a natural number. This index specifies the upper bound of structural size. You can think of this as an upper bound for the height of a tree (given that the data type is an F-branching tree for some functor F). Sized version of List looks almost like a Vec, for example:
data SizedList (A : Set) : ℕ → Set where
  [] : ∀ {n} → SizedList A n
  _∷_ : ∀ {n} → A → SizedList A n → SizedList A (suc n)
But sized types add a few features that make them easier to use. You have a constant ∞ for the case when you don't care about the size. suc is called ↑ and Agda implements a few rules, such as ↑ ∞ = ∞.
Let's rewrite the Trie example to use sized types. We need a pragma at the top of the file and one import:
{-# OPTIONS --sized-types #-}
open import Size
And here's the modified data type:
data Trie (A : Set) : {i : Size} → Set where
  nil : ∀ {i} → Trie A {↑ i}
  node : ∀ {i} → A → List (Trie A {i}) → Trie A {↑ i}
If you leave the map-trie function as is, the termination checker is still going to complain. That's because when you don't specify any size, Agda will fill in infinity (i.e. don't-care value) and we are back at the beginning.
However, we can mark map-trie as size-preserving:
map-trie : ∀ {i A B} → (A → B) → Trie A {i} → Trie B {i}
map-trie f nil = nil
map-trie f (node x xs) = node (f x) (map (map-trie f) xs)
So, if you give it a Trie bounded by i, it will give you another Trie bounded by i as well. So map-trie can never make the Trie larger, only equally large or smaller. This is enough for the termination checker to figure out that map (map-trie f) xs is okay.
This technique can also be applied to your Trie:
open import Size renaming (↑_ to ^_)

data Trie {𝒂} (A : Set 𝒂) : {i : Size} → Type → Set 𝒂 where
  〈〉 : ∀ {i} → A → Trie A {^ i} 𝟏
  〔_,_〕 : ∀ {i τ₁ τ₂} → Trie A {i} τ₁ → Trie A {i} τ₂ → Trie A {^ i} (τ₁ + τ₂)
  ↑_ : ∀ {i τ₁ τ₂} → Trie (Trie A {i} τ₂) {i} τ₁ → Trie A {^ i} (τ₁ ⨰ τ₂)
  roll : ∀ {i τ} → Trie A {i} (const (μ τ) * τ) → Trie A {^ i} (μ τ)

infixr 5 Trie
syntax Trie A τ = τ ▷ A
fmap : ∀ {i 𝒂} {A B : Set 𝒂} {τ} → (A → B) → Trie A {i} τ → Trie B {i} τ
fmap f (〈〉 x) = 〈〉 (f x)
fmap f 〔 σ₁ , σ₂ 〕 = 〔 fmap f σ₁ , fmap f σ₂ 〕
fmap f (↑ σ) = ↑ fmap (fmap f) σ
fmap f (roll σ) = roll (fmap f σ)

Assisting Agda's termination checker

Suppose we define a function
f : ℕ → ℕ
f 0 = 0
f (s n) = f (n / 2) -- this / operator is implemented as floored division.
Agda will paint f in salmon because it cannot tell if n/2 is smaller than n. I don't know how to tell Agda's termination checker anything. I see in the standard library they have a floored division by 2 and a proof that n/2 < n. However, I still fail to see how to get the termination checker to realize that recursion has been made on a smaller subproblem.
Agda's termination checker only checks for structural recursion (i.e. calls that happen on structurally smaller arguments) and there's no way to establish that a certain relation (such as _<_) implies that one of the arguments is structurally smaller.
Digression: Similar problem happens with positivity checker. Consider the standard fix-point data type:
data μ_ (F : Set → Set) : Set where
  fix : F (μ F) → μ F
Agda rejects this because F may not be positive in its first argument. But we cannot restrict μ to only take positive type functions, or show that some particular type function is positive.
How do we normally show that a recursive function terminates? For natural numbers, it is the fact that if the recursive call happens on a strictly smaller number, we eventually have to reach zero and the recursion stops; for lists the same holds for their length; for sets we could use the strict subset relation; and so on. Notice that "strictly smaller number" doesn't work for integers.
The property that all these relations share is called well-foundedness. Informally speaking, a relation is well-founded if it doesn't have any infinite descending chains. For example, < on natural numbers is well founded, because for any number n:
n > n - 1 > ... > 2 > 1 > 0
That is, the length of such a chain is limited by n + 1.
≤ on natural numbers, however, is not well-founded:
n ≥ n ≥ ... ≥ n ≥ ...
And neither is < on integers:
n > n - 1 > ... > 1 > 0 > -1 > ...
Does this help us? It turns out we can encode what it means for a relation to be well-founded in Agda and then use it to implement your function.
For simplicity, I'm going to bake the _<_ relation into the data type. First of all, we must define what it means for a number to be accessible: n is accessible if all m such that m < n are also accessible. This of course stops at n = 0, because there is no m such that m < 0, so the statement holds trivially.
data Acc (n : ℕ) : Set where
  acc : (∀ m → m < n → Acc m) → Acc n
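For intuition, the base case can be built by hand (a sketch; m < 0 unfolds to suc m ≤ zero, which has no constructor, so the absurd pattern covers every case):
-- 0 is accessible: there is no m with m < 0.
acc-zero : Acc 0
acc-zero = acc λ _ ()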
Now, if we can show that all natural numbers are accessible, then we have shown that < is well-founded. Why is that so? There must be a finite number of the acc constructors (i.e. no infinite descending chain) because Agda won't let us write infinite recursion. Now, it might seem as if we just pushed the problem back one step further, but the well-foundedness proof itself can be written in a structurally recursive way!
So, with that in mind, here's the definition of < being well-founded:
WF : Set
WF = ∀ n → Acc n
And the well-foundedness proof:
<-wf : WF
<-wf n = acc (go n)
  where
  go : ∀ n m → m < n → Acc m
  go zero m ()
  go (suc n) zero _ = acc λ _ ()
  go (suc n) (suc m) (s≤s m<n) = acc λ o o<sm → go n o (trans o<sm m<n)
Notice that go is nicely structurally recursive. trans can be imported like this:
open import Data.Nat
open import Relation.Binary

open DecTotalOrder decTotalOrder using (trans)
Next, we need a proof that ⌊ n /2⌋ ≤ n:
/2-less : ∀ n → ⌊ n /2⌋ ≤ n
/2-less zero = z≤n
/2-less (suc zero) = z≤n
/2-less (suc (suc n)) = s≤s (trans (/2-less n) (right _))
  where
  right : ∀ n → n ≤ suc n
  right zero = z≤n
  right (suc n) = s≤s (right n)
And finally, we can write your f function. Notice how it suddenly becomes structurally recursive thanks to Acc: the recursive calls happen on arguments with one acc constructor peeled off.
f : ℕ → ℕ
f n = go _ (<-wf n)
  where
  go : ∀ n → Acc n → ℕ
  go zero _ = 0
  go (suc n) (acc a) = go ⌊ n /2⌋ (a _ (s≤s (/2-less _)))
Now, having to work directly with Acc isn't very nice. And that's where Dominique's answer comes in. All this stuff I've written here has already been done in the standard library. It is more general (the Acc data type is actually parametrized over the relation) and it allows you to just use <-rec without having to worry about Acc.
Taking a closer look, we are actually pretty close to the generic solution. Let's see what we get when we parametrize over the relation. For simplicity I'm not dealing with universe polymorphism.
A relation on A is just a function taking two As and returning Set (we could call it a binary predicate):
Rel : Set → Set₁
Rel A = A → A → Set
We can easily generalize Acc by changing the hardcoded _<_ : ℕ → ℕ → Set to an arbitrary relation over some type A:
data Acc {A} (_<_ : Rel A) (x : A) : Set where
  acc : (∀ y → y < x → Acc _<_ y) → Acc _<_ x
The definition of well-foundedness changes accordingly:
WellFounded : ∀ {A} → Rel A → Set
WellFounded _<_ = ∀ x → Acc _<_ x
Now, since Acc is an inductive data type like any other, we should be able to write its eliminator. For inductive types, this is a fold (much like foldr is the eliminator for lists): we tell the eliminator what to do with each constructor case and the eliminator applies this to the whole structure.
In this case, we'll do just fine with the simple variant:
foldAccSimple : ∀ {A} {_<_ : Rel A} {R : Set} →
                (∀ x → (∀ y → y < x → R) → R) →
                ∀ z → Acc _<_ z → R
foldAccSimple {R = R} acc′ = go
  where
  go : ∀ z → Acc _ z → R
  go z (acc a) = acc′ z λ y y<z → go y (a y y<z)
If we know that _<_ is well-founded, we can skip the Acc _<_ z argument completely, so let's write a small convenience wrapper:
recSimple : ∀ {A} {_<_ : Rel A} → WellFounded _<_ → {R : Set} →
            (∀ x → (∀ y → y < x → R) → R) →
            A → R
recSimple wf acc′ z = foldAccSimple acc′ z (wf z)
And finally:
<-wf : WellFounded _<_
<-wf = {- same definition -}
<-rec = recSimple <-wf
f : ℕ → ℕ
f = <-rec go
  where
  go : ∀ n → (∀ m → m < n → ℕ) → ℕ
  go zero _ = 0
  go (suc n) r = r ⌊ n /2⌋ (s≤s (/2-less _))
And indeed, this looks (and works) almost like the one in the standard library!
Here's the fully dependent version in case you are wondering:
foldAcc : ∀ {A} {_<_ : Rel A} (P : A → Set) →
          (∀ x → (∀ y → y < x → P y) → P x) →
          ∀ z → Acc _<_ z → P z
foldAcc P acc′ = go
  where
  go : ∀ z → Acc _ z → P z
  go _ (acc a) = acc′ _ λ _ y<z → go _ (a _ y<z)
rec : ∀ {A} {_<_ : Rel A} → WellFounded _<_ →
      (P : A → Set) → (∀ x → (∀ y → y < x → P y) → P x) →
      ∀ z → P z
rec wf P acc′ z = foldAcc P acc′ _ (wf z)
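As a quick usage sketch, the earlier f can be recovered from the fully dependent eliminator by instantiating P with the constant family λ _ → ℕ (reusing /2-less and the same go as before):
f′ : ℕ → ℕ
f′ = rec <-wf (λ _ → ℕ) go
  where
  go : ∀ n → (∀ m → m < n → ℕ) → ℕ
  go zero _ = 0
  go (suc n) r = r ⌊ n /2⌋ (s≤s (/2-less _))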
I would like to offer a slightly different answer than the ones given above. In particular, I want to suggest that instead of trying to somehow convince the termination checker that actually, no, this recursion is perfectly fine, we should instead try to reify the well-founded-ness so that the recursion is manifestly fine in virtue of being structural.
The idea here is that the problem comes from being unable to see that n / 2 is somehow a "part" of n. Structural recursion wants to break a thing into its immediate parts, but the way that n / 2 is a "part" of n is that we drop every other suc. But it's not obvious up front how many to drop; we have to look around and try to line things up. What would be nice is if we had some type that had constructors for "multiple" sucs.
To make the problem slightly more interesting, let's instead try to define the function that behaves like
f : ℕ → ℕ
f 0 = 0
f (suc n) = 1 + (f (n / 2))
that is to say, it should be the case that
f n = ⌈ log₂ (n + 1) ⌉
Now naturally the above definition won't work, for the same reasons your f won't. But let's pretend that it did, and let's explore the "path", so to speak, that the argument would take through the natural numbers. Suppose we look at n = 8:
f 8 = 1 + f 4 = 1 + 1 + f 2 = 1 + 1 + 1 + f 1 = 1 + 1 + 1 + 1 + f 0 = 1 + 1 + 1 + 1 + 0 = 4
so the "path" is 8 -> 4 -> 2 -> 1 -> 0. What about, say, 11?
f 11 = 1 + f 5 = 1 + 1 + f 2 = ... = 4
so the "path" is 11 -> 5 -> 2 -> 1 -> 0.
Well, naturally, what's going on here is that at each step we're either dividing by 2, or subtracting one and dividing by 2. Every natural number greater than 0 can be decomposed uniquely in this fashion. If it's even, divide by two and proceed; if it's odd, subtract one, divide by two and proceed.
So now we can see exactly what our data type should look like. We need a type that has a constructor that means "twice as many suc's" and another that means "twice as many suc's plus one", as well as of course a constructor that means "zero sucs":
data Decomp : ℕ → Set where
  zero : Decomp zero
  2*_ : ∀ {n} → Decomp n → Decomp (n * 2)
  2*_+1 : ∀ {n} → Decomp n → Decomp (suc (n * 2))
It helps to first define +1 for Decomps:
_+1 : {n : ℕ} → Decomp n → Decomp (suc n)
zero +1 = 2* zero +1
(2* d) +1 = 2* d +1
(2* d +1) +1 = 2* (d +1)
With +1 in hand, we can define the function that decomposes a natural number into the Decomp that corresponds to it:
decomp : (n : ℕ) → Decomp n
decomp zero = zero
decomp (suc n) = decomp n +1
Given a Decomp, we can flatten it down into a natural number that ignores the distinctions between 2*_ and 2*_+1:
flatten : {n : ℕ} → Decomp n → ℕ
flatten zero = zero
flatten (2* p) = suc (flatten p)
flatten (2* p +1) = suc (flatten p)
And now it's trivial to define f:
f : ℕ → ℕ
f n = flatten (decomp n)
This happily passes the termination checker with no trouble, because we're never actually recursing on the problematic n / 2. Instead, we convert the number into a format that directly represents its path through the number space in a structurally recursive way.
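A quick sanity check of the sketch (assuming the definitions above are in scope; 8 decomposes into four constructors above zero, matching the path 8 -> 4 -> 2 -> 1 -> 0 traced earlier):
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

test-f8 : f 8 ≡ 4
test-f8 = refl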
Edit: It occurred to me only a little while ago that Decomp is a little-endian representation of binary numbers. 2*_ is "append 0 to the end/shift left 1 bit" and 2*_+1 is "append 1 to the end/shift left 1 bit and add one". So the above code is really about showing that binary numbers are structurally recursive with respect to dividing by 2, which they ought to be! That makes it much easier to understand, I think, but I don't want to change what I wrote already, so we could instead do some renaming here: Decomp ~> Binary, 2*_ ~> _,zero, 2*_+1 ~> _,one, decomp ~> natToBin, flatten ~> countBits.
After accepting Vitus' answer, I discovered a different way to accomplish the goal of proving that a function terminates in Agda, namely using "sized types." I am providing my answer here because it seems acceptable, and also to invite critique of any weak points.
Sized types are described in this paper:
http://arxiv.org/pdf/1012.4896.pdf
They are implemented in Agda, not only MiniAgda; see here: http://www2.tcs.ifi.lmu.de/~abel/talkAIM2008Sendai.pdf.
The idea is to augment the data type with a size that allows the typechecker to more easily prove termination. Size is defined in the standard library.
open import Size
We define sized natural numbers:
data Nat : {i : Size} → Set where
  zero : {i : Size} → Nat {↑ i}
  succ : {i : Size} → Nat {i} → Nat {↑ i}
Next, we define predecessor and subtraction (monus):
pred : {i : Size} → Nat {i} → Nat {i}
pred .{↑ i} (zero {i}) = zero {i}
pred .{↑ i} (succ {i} n) = n
sub : {i : Size} → Nat {i} → Nat {∞} → Nat {i}
sub .{↑ i} (zero {i}) n = zero {i}
sub .{↑ i} (succ {i} m) zero = succ {i} m
sub .{↑ i} (succ {i} m) (succ n) = sub {i} m n
Now, we may define division via Euclid's algorithm:
div : {i : Size} → Nat {i} → Nat → Nat {i}
div .{↑ i} (zero {i}) n = zero {i}
div .{↑ i} (succ {i} m) n = succ {i} (div {i} (sub {i} m n) n)
data ⊥ : Set where
record ⊤ : Set where
notZero : Nat → Set
notZero zero = ⊥
notZero _ = ⊤
We give division for nonzero denominators.
If the denominator is nonzero, then it is of the form b+1. We then take
divPos a (b+1) = div a b
since div a b returns the ceiling of a/(b+1).
divPos : {i : Size} → Nat {i} → (m : Nat) → (notZero m) → Nat {i}
divPos a (succ b) p = div a b
divPos a zero ()
As auxiliary:
div2 : {i : Size} → Nat {i} → Nat {i}
div2 n = divPos n (succ (succ zero)) (record {})
Now we can define a divide and conquer method for computing the n-th Fibonacci number.
fibd : {i : Size} → Nat {i} → Nat
fibd zero = zero
fibd (succ zero) = succ zero
fibd (succ (succ zero)) = succ zero
fibd (succ n) with even (succ n)
fibd .{↑ i} (succ {i} n) | true =
  let
    -- When m = n+1, the input, is even, we set k = m/2.
    -- Note, ceil(m/2) = ceil(n/2).
    k = div2 {i} n
    fib[k-1] = fibd {i} (pred {i} k)
    fib[k] = fibd {i} k
    fib[k+1] = fib[k-1] + fib[k]
  in
    (fib[k+1] * fib[k]) + (fib[k] * fib[k-1])
fibd .{↑ i} (succ {i} n) | false =
  let
    -- When m = n+1, the input, is odd, we set k = n/2 = (m-1)/2.
    k = div2 {i} n
    fib[k-1] = fibd {i} (pred {i} k)
    fib[k] = fibd {i} k
    fib[k+1] = fib[k-1] + fib[k]
  in
    (fib[k+1] * fib[k+1]) + (fib[k] * fib[k])
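Note that fibd above assumes an even test (as well as _+_ and _*_) on these sized naturals, which the snippet does not define. A minimal sketch of even in the same style:
open import Data.Bool using (Bool; true; false)

-- Parity by peeling off two successors at a time; the recursion is
-- structural, so no size tricks are needed here.
even : {i : Size} → Nat {i} → Bool
even zero = true
even (succ zero) = false
even (succ (succ n)) = even n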
You cannot do this directly: Agda's termination checker only considers recursion ok on arguments that are syntactically smaller. However, the Agda standard library provides a few modules for proving termination using a well-founded order between the arguments of the functions. The standard order on natural numbers is such an order and can be used here.
Using the code in Induction.*, you can write your function as follows:
open import Data.Nat
open import Induction.WellFounded
open import Induction.Nat
s≤′s : ∀ {n m} → n ≤′ m → suc n ≤′ suc m
s≤′s ≤′-refl = ≤′-refl
s≤′s (≤′-step lt) = ≤′-step (s≤′s lt)
proof : ∀ n → ⌊ n /2⌋ ≤′ n
proof 0 = ≤′-refl
proof 1 = ≤′-step (proof zero)
proof (suc (suc n)) = ≤′-step (s≤′s (proof n))
f : ℕ → ℕ
f = <-rec (λ _ → ℕ) helper
  where
  helper : (n : ℕ) → (∀ y → y <′ n → ℕ) → ℕ
  helper 0 rec = 0
  helper (suc n) rec = rec ⌊ n /2⌋ (s≤′s (proof n))
I found an article with some explanation here. But there may be better references out there.
A similar question appeared on the Agda mailing-list a few weeks ago and the consensus seemed to be to inject the Data.Nat element into Data.Bin and then use structural recursion on this representation which is well-suited for the job at hand.
You can find the whole thread here : http://comments.gmane.org/gmane.comp.lang.agda/5690
You can avoid using well-founded recursion. Let's say you want a function, that applies ⌊_/2⌋ to a number, until it reaches 0, and collects the results. With the {-# TERMINATING #-} pragma it can be defined like this:
{-# TERMINATING #-}
⌊_/2⌋s : ℕ -> List ℕ
⌊_/2⌋s 0 = []
⌊_/2⌋s n = n ∷ ⌊ ⌊ n /2⌋ /2⌋s
The second clause is equivalent to
⌊_/2⌋s n = n ∷ ⌊ n ∸ (n ∸ ⌊ n /2⌋) /2⌋s
It's possible to make ⌊_/2⌋s structurally recursive by inlining this substraction:
⌊_/2⌋s : ℕ -> List ℕ
⌊_/2⌋s = go 0 where
  go : ℕ -> ℕ -> List ℕ
  go _ 0 = []
  go 0 (suc n) = suc n ∷ go (n ∸ ⌈ n /2⌉) n
  go (suc i) (suc n) = go i n
go (n ∸ ⌈ n /2⌉) n is a simplified version of go (suc n ∸ ⌊ suc n /2⌋ ∸ 1) n: since ⌊ suc n /2⌋ = ⌈ n /2⌉ and suc n ∸ x ∸ 1 = n ∸ x, the two expressions agree.
Some tests:
test-5 : ⌊ 5 /2⌋s ≡ 5 ∷ 2 ∷ 1 ∷ []
test-5 = refl
test-25 : ⌊ 25 /2⌋s ≡ 25 ∷ 12 ∷ 6 ∷ 3 ∷ 1 ∷ []
test-25 = refl
Now let's say you want a function, that applies ⌊_/2⌋ to a number, until it reaches 0, and sums the results. It's simply
⌊_/2⌋sum : ℕ -> ℕ
⌊ n /2⌋sum = go ⌊ n /2⌋s where
  go : List ℕ -> ℕ
  go [] = 0
  go (n ∷ ns) = n + go ns
So we can just run our recursion on a list that contains the values produced by the ⌊_/2⌋s function.
A more concise version is
⌊ n /2⌋sum = foldr _+_ 0 ⌊ n /2⌋s
And back to well-foundedness.
open import Function
open import Relation.Nullary
open import Relation.Binary
open import Induction.WellFounded
open import Induction.Nat
calls : ∀ {a b ℓ} {A : Set a} {_<_ : Rel A ℓ} {guarded : A -> Set b}
      -> (f : A -> A)
      -> Well-founded _<_
      -> (∀ {x} -> guarded x -> f x < x)
      -> (∀ x -> Dec (guarded x))
      -> A
      -> List A
calls {A = A} {_<_} f wf smaller dec-guarded x = go (wf x) where
  go : ∀ {x} -> Acc _<_ x -> List A
  go {x} (acc r) with dec-guarded x
  ... | no _ = []
  ... | yes g = x ∷ go (r (f x) (smaller g))
This function does the same as the ⌊_/2⌋s function, i.e. it produces the values for the recursive calls, but for any function that satisfies certain conditions.
Look at the definition of go. If x is not guarded, then return []. Otherwise prepend x and call go on f x (we could write go {x = f x} ...), whose accessibility proof is structurally smaller.
We can redefine ⌊_/2⌋s in terms of calls:
⌊_/2⌋s : ℕ -> List ℕ
⌊_/2⌋s = calls {guarded = ?} ⌊_/2⌋ ? ? ?
⌊ n /2⌋s returns [] only when n is 0, so guarded = λ n -> n > 0.
Our well-founded relation is based on _<′_ and defined in the Induction.Nat module as <-well-founded.
So we have
⌊_/2⌋s = calls {guarded = λ n -> n > 0} ⌊_/2⌋ <-well-founded {!!} {!!}
The type of the next hole is {x : ℕ} → x > 0 → ⌊ x /2⌋ <′ x
We can easily prove this proposition:
open import Data.Nat.Properties
suc-⌊/2⌋-≤′ : ∀ n -> ⌊ suc n /2⌋ ≤′ n
suc-⌊/2⌋-≤′ 0 = ≤′-refl
suc-⌊/2⌋-≤′ (suc n) = s≤′s (⌊n/2⌋≤′n n)
>0-⌊/2⌋-<′ : ∀ {n} -> n > 0 -> ⌊ n /2⌋ <′ n
>0-⌊/2⌋-<′ {suc n} (s≤s z≤n) = s≤′s (suc-⌊/2⌋-≤′ n)
The type of the last hole is (x : ℕ) → Dec (x > 0), we can fill it by _≤?_ 1.
And the final definition is
⌊_/2⌋s : ℕ -> List ℕ
⌊_/2⌋s = calls ⌊_/2⌋ <-well-founded >0-⌊/2⌋-<′ (_≤?_ 1)
Now you can recurse on a list, produced by ⌊_/2⌋s, without any termination issues.
I encountered this sort of problem when trying to write a quicksort function in Agda.
While other answers seem to explain the problem and solutions more generally, coming from a CS background, I think the following wording would be more accessible for certain readers:
The problem of working with the Agda termination checker comes down to how we can internalize the termination checking process.
Suppose we want to define a function
func : Some-Recursively-Defined-Type → A
func non-recursive-case = some-a
func (recursive-case n) = some-other-func (func (f n)) (func (g n)) ...
In many of the cases, we, the writers, know that f n and g n are going to be smaller than recursive-case n. Furthermore, the proofs that these are smaller are usually not that difficult. The problem is more about how we can communicate this knowledge to Agda.
It turns out we can do this by adding a timer argument to the definition.
Timer : Type
Timer = Nat
measure : Some-Recursively-Defined-Type → Timer
-- this function returns an upper bound on how many steps are left before termination;
-- the estimate should be tight enough that, given a timer of zero,
-- pattern matching on the recursive cases is obviously impossible
measure = {! !}
func-aux :
  (timer : Timer) -- the timer
  (actual-arguments : Some-Recursively-Defined-Type)
  (timer-bounding : measure actual-arguments ≤ timer)
  → A
func-aux zero non-recursive-case prf = some-a
-- the prf should force the arguments to pattern match only to the non-recursive cases
func-aux (succ t) non-recursive-case prf = some-a
func-aux (succ t) (recursive-case n) prf =
  some-other-func (func-aux t (f n) prf') (func-aux t (g n) prf'') ... where
    prf' : measure (f n) ≤ t
    prf' = {! !}
    prf'' : measure (g n) ≤ t
    prf'' = {! !}
With these at hand, we can define the function we want as something like the following :
func : Some-Recursively-Defined-Type → A
func x with measure x
func x | n = func-aux n x (≤-is-reflexive n)
Caveat
I have not taken into account anything about whether the computation would be efficient.
While the Timer type is not restricted to Nat (any type with a strong enough order relation would do), I think it is pretty safe to say we don't gain much even if we consider such generality.
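As a concrete (and simplified) instance of this pattern, here is a sketch in plain Agda for the halving recursion from the earlier answers. For brevity the measure is the number itself and the timer-bounding proof is dropped, so the out-of-fuel clause returns a dummy [] instead of being ruled out by a proof:
open import Data.Nat using (ℕ; zero; suc; ⌊_/2⌋)
open import Data.List using (List; []; _∷_)

-- The fuel argument decreases structurally on every call, which is what
-- satisfies the termination checker; starting with fuel n suffices
-- because the halving sequence from n has at most n steps.
halves-aux : ℕ → ℕ → List ℕ
halves-aux zero _ = []
halves-aux (suc t) zero = []
halves-aux (suc t) (suc n) = suc n ∷ halves-aux t ⌊ suc n /2⌋

halves : ℕ → List ℕ
halves n = halves-aux n n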
