Need hints about proving some intuitionistic logic statements

I'm new to Agda, and I'm new to dependently typed programming and proof assistants in general. I decided to get myself started by constructing simple intuitionistic logic proofs, using the definitions I found in Programming Language Foundations in Agda, and I had some success. However, I got confused when I tried to write the following proof:
∨-identity-indirect : {A B : Set} → (¬ A) ∧ (A ∨ B) → B
Proving this on paper would be fairly simple: expanding ¬ A, we have A → ⊥, and applying that to the left disjunct turns A ∨ B into ⊥ ∨ B. So the statement reduces to (⊥ ∨ B) → B, which is obviously true.
I was able to successfully prove the latter part, that is, (⊥ ∨ B) → B:
∨-identity : {A : Set} → (⊥ ∨ A) → A
∨-identity (∨-left ())
∨-identity (∨-right A) = A
Then, I was able to write:
∨-identity-indirect ⟨ ¬A , A∨B ⟩ = ∨-identity ?
This suggests that I need to produce ⊥ ∨ B from ¬A and A ∨ B. I'd like to somehow replace the A in A ∨ B with ¬A A (the result of applying ¬A to it), but I don't think there's a way of doing so.
When trying to apply the ∨-identity case-analysis pattern to ∨-identity-indirect, I get an error message saying that A should be empty, but why that holds is not obvious to me - I assume I need to somehow make it obvious to Agda, by making use of ¬A.
Am I on the right track, or am I getting this wrong completely? How should I go about writing this ∨-identity-indirect function?

You're probably trying to pattern match on a value of type ¬ A with (), which doesn't work, because ¬ A expands to A → ⊥, i.e. it is a function that will only return a ⊥ once you give it some A. What you can do instead is apply it to the A you extract from A ∨ B:
replace-A : {A B : Set} → (¬ A) → (A ∨ B) → ⊥ ∨ B
replace-A f (∨-left x) = ∨-left (f x)
replace-A _ (∨-right y) = ∨-right y
Having that, ∨-identity-indirect is straightforward:
∨-identity-indirect : {A B : Set} → (¬ A) ∧ (A ∨ B) → B
∨-identity-indirect ⟨ ¬A , A∨B ⟩ = ∨-identity (replace-A ¬A A∨B)
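For reference, the code above assumes definitions along these lines (a minimal sketch; PLFA itself uses different names, e.g. _⊎_ with inj₁/inj₂ for disjunction, so this just mirrors the constructor names used in the question):
-- minimal sketch of the assumed connectives, names matched to the question
data ⊥ : Set where              -- the empty type: no constructors

¬_ : Set → Set
¬ A = A → ⊥                     -- negation as a function into ⊥

data _∨_ (A B : Set) : Set where
  ∨-left  : A → A ∨ B
  ∨-right : B → A ∨ B

data _∧_ (A B : Set) : Set where
  ⟨_,_⟩ : A → B → A ∧ B         -- pairing constructor used in the proofs above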

Related

Reification for interpretation functions which use different interpretation functions below quantifiers/lambdas

I'm currently trying to use Isabelle/HOL's reification tactic, but I'm unable to use different interpretation functions below quantifiers/lambdas. The MWE below illustrates this. The important part is the definition of the form function, where the call to ter occurs below the ∀. When I try to use the reify tactic I get a "Cannot find the atoms equation" error. I don't get this error for interpretation functions that only call themselves under quantifiers.
I can't really reformulate my problem to avoid this. Does anybody know how to get reify working for such cases?
theory MWE
imports
"HOL-Library.Reflection"
begin
datatype Ter = V nat | P Ter Ter
datatype Form = All0 Ter
fun ter :: "Ter ⇒ nat list ⇒ nat"
where "ter (V n) vs = vs ! n"
| "ter (P t1 t2) vs = ter t1 vs + ter t2 vs"
fun form :: "Form ⇒ nat list ⇒ bool"
where "form (All0 t) vs = (∀ v . ter t (v#vs) = 0)" (* use of different interpretation function below quantifier *)
(*
I would expect this to reify to:
form (All0 (P (V 0) (V 0))) []
instead I get an error :-(
*)
lemma "∀ n :: nat . n + n = 0"
apply (reify ter.simps form.simps)
(* proof (prove)
goal (1 subgoal):
1. ∀n. n + n = n + n
Cannot find the atoms equation *)
oops
(* As a side note: the following example in src/HOL/ex/Reflection_Examples.thy (line 448, Isabelle2022) seems to be broken? For me, the reify invocation
doesn't change the goal at all. It uses quantifiers too, but only calls the same interpretation function under quantifiers and also doesn't throw an error,
so at least for me this seems to be unrelated to my problem.
*)
(*
lemma " ∀x. ∃n. ((Suc n) * length (([(3::int) * x + f t * y - 9 + (- z)] # []) # xs) = length xs) ∧ m < 5*n - length (xs # [2,3,4,x*z + 8 - y]) ⟶ (∃p. ∀q. p ∧ q ⟶ r)"
apply (reify Irifm.simps Irnat_simps Irlist.simps Irint_simps)
oops
*)
end

Prove recursive function exists using only `nat_ind`

I'm trying to prove the following in Coq:
∀ B: Type, ∀ a: B, ∀ b: nat -> B -> B, ∃ f: nat -> B, f 0 = a ∧ ∀ n: nat, f (S n) = b n (f n).
This implies that a fairly general class of recursive functions exists. I know that I can construct such a function using Fixpoint items or fix expressions, but I want to avoid those and instead use nat_ind, defined with this type:
∀ P: nat → Prop, P 0 → (∀ n: nat, P n → P (S n)) → ∀ n: nat, P n
I believe this is possible, since nat_ind behaves like a recursion combinator, but I haven't figured out how to prove it. The problem is that the induction variable is under the ∃ f binder, and I don't have access to it. I'm able to prove something like this:
∀ B: Type, ∀ a: B, ∀ b: nat -> B -> B, ∀ m: nat,
∃ f: nat -> B, f 0 = a ∧ ∀ n: nat, n < m -> f (S n) = b n (f n)
But I don't think it helps in proving the original one.
Is it possible to prove the original statement without using fix directly? I'm OK with using double negation and other well-known axioms if needed. Using nat_rec and nat_rect is also fine, but only as an opaque axiom. Precisely, using these is fine:
Axiom nat_rec2: ∀ P : nat → Set, P 0 → (∀ n : nat, P n → P (S n)) → ∀ n : nat, P n.
Axiom nat_rect2: ∀ P : nat → Type, P 0 → (∀ n : nat, P n → P (S n)) → ∀ n : nat, P n.
The problem seems to be to obtain recursion from the following axiomatization of nat:
Parameter nat : Type.
Parameter O : nat.
Parameter S : nat -> nat.
Parameter disjoint_O_S : forall n, O <> S n.
Parameter injective_S : forall n n', S n = S n' -> n = n'.
Parameter nat_rect : forall P: nat -> Type, P O -> (forall n: nat, P n -> P (S n)) -> forall n : nat, P n.
The main issue is that the nat_rect axiom has no computational behavior, so although we might define a recursor B -> (nat -> B -> B) -> nat -> B as nat_rect (fun _ => B), we can't prove anything about it.
The solution is to first encode the graph of the desired recursive function f as a relation, and then use nat_rect to produce a dependent pair of the value that is going to be f n together with evidence that that value is in the graph of f.
Section Rec.
Context (B : Type) (a : B) (b : nat -> B -> B).
Inductive graph : nat -> B -> Prop :=
| recO : graph O a
| recS n y : graph n y -> graph (S n) (b n y)
.
Lemma graph_fun : forall n, { y | forall y', y = y' <-> graph n y' }.
Proof.
induction n as [ | n IH ] using nat_rect.
- exists a; split.
+ intros <-. constructor.
+ inversion 1; [ reflexivity | ]. contradiction (disjoint_O_S n); auto.
- destruct IH as [y IH]. exists (b n y); split.
+ intros <-. constructor. apply IH. auto.
+ inversion 1; subst. contradiction (disjoint_O_S n); auto.
apply injective_S in H0. subst.
apply IH in H1. subst; auto.
Qed.
Theorem nat_rec : exists (f : nat -> B), f O = a /\ forall n, f (S n) = b n (f n).
Proof.
exists (fun n => proj1_sig (graph_fun n)). split.
- apply (proj2_sig (graph_fun O)). constructor.
- intros n. apply (proj2_sig (graph_fun (S n))).
constructor. apply (proj2_sig (graph_fun n)).
reflexivity.
Qed.
End Rec.
If you have the Prop induction principle nat_ind instead of nat_rect, the same technique can be adapted by also assuming the axiom constructive_indefinite_description (which actually lets you reconstruct nat_rect, but here you can more simply apply it at the beginning of graph_fun):
From Coq Require Import IndefiniteDescription.
About constructive_indefinite_description.
(*
constructive_indefinite_description :
forall (A : Type) (P : A->Prop),
(exists x, P x) -> { x : A | P x }.
*)
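For instance, placed inside Section Rec (before End Rec), the adapted lemma can start like this. This is just a sketch: nat_ind is assumed here as an axiom with the Prop type stated in the question, and the remaining script is the same case analysis as in graph_fun above.
Lemma graph_fun' : forall n, { y | forall y', y = y' <-> graph n y' }.
Proof.
  intro n.
  (* The goal is a sig type in Type; constructive_indefinite_description
     lets us prove the corresponding existential statement instead,
     which lives in Prop. *)
  apply constructive_indefinite_description.
  (* Goal is now: exists y, forall y', y = y' <-> graph n y'.
     From here, "induction n using nat_ind" followed by the same case
     analysis as in graph_fun goes through, since every remaining goal
     is a Prop. *)
Abort.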

Defining a function to a subset of the codomain

I am trying to define the image restriction of a function
f : A → B as f' : A → f[A], where f'(a) = f(a). However, I am not sure how to define it in Lean.
In my opinion, the most intuitive way to define it is:
def fun_to_image {A B: Type*} (f: A → B): A → image f set.univ :=
λ a, f a
However, this gets rejected because (f a) is of type B, not (image f set.univ).
I even tried proving that f(a) ∈ (image f univ). It didn't help:
def fun_to_image (f : A → B) : A → image f univ :=
λ a ,
have h : f a ∈ image f univ :=
exists.intro a
(and.intro trivial (eq.refl (f a))),
f a
The error message is:
type mismatch, term
λ (a : A), f a
has type
A → B
but is expected to have type
A → ↥(f '' univ)
set.univ and image are defined as follows in data.set
def univ : set α :=
λ a, true
def image (f : α → β) (s : set α) : set β :=
{b | ∃ a, a ∈ s ∧ f a = b}
Any idea how this can be done?
You are almost there (-;
There is a little “warning sign” in the error message.
but is expected to have type
A → ↥(f '' univ)
You can see the creepy up-arrow ↥. Let me explain what it means:
As you have recalled, image f set.univ is defined as a subset. Since you are treating it as a type, it is automatically coerced into a so-called subtype: if s : set X, then the corresponding subtype ↥s has terms of the form ⟨x, h⟩ (type these as \< and \> in VS Code), where x : X and h : x ∈ s.
This “coercion to type” is indicated by the ↥.
So, to finish your definition, you will have to write ⟨f a, h⟩, instead of f a.
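Concretely, reusing the membership proof from your second attempt, the definition goes through like this (a sketch, assuming import data.set.basic and open set so that image and univ are in scope, as in your snippet):
import data.set.basic
open set

def fun_to_image {A B : Type*} (f : A → B) : A → image f univ :=
λ a,
  -- the same membership proof as in your attempt
  have h : f a ∈ image f univ :=
    exists.intro a (and.intro trivial (eq.refl (f a))),
  -- package the value together with the membership proof,
  -- giving a term of the coerced subtype ↥(f '' univ)
  ⟨f a, h⟩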
Note that in the main library there is also a definition of range (here), which is meant to be used in place of image _ set.univ.
It already comes with (L1167):
def range_factorization (f : ι → β) : ι → range f :=
λ i, ⟨f i, mem_range_self i⟩

Proving existence of an infinite path in Isabelle

Consider the following inductive predicate:
inductive terminating where
"(⋀ s'. s → s' ⟹ terminating s') ⟹ terminating s"
I would like to prove that if a node s is not terminating then there exists an infinite chain of the form s0 → s1 → s2 → .... Something along the lines of:
lemma "¬ terminating (c,s) ⟹
∃ cfs. (cfs 0 = (c,s) ∧ (∀ n. (cfs n) → (cfs (n+1))))"
How can I prove this in Isabelle?
Edit
The final goal is to prove the following:
lemma "(∀s t. (c, s) ⇒ t = (c', s) ⇒ t) ⟹
terminating (c, s) = terminating (c', s) "
where ⇒ is the big-step semantics of the GCL. Perhaps another method is needed to prove this theorem.
If you are comfortable using the choice operator, you can easily construct a witness using SOME, for instance:
primrec infinite_trace :: ‹'s ⇒ nat ⇒ 's› where
‹infinite_trace c0 0 = c0›
| ‹infinite_trace c0 (Suc n) =
(SOME c. infinite_trace c0 n → c ∧ ¬ terminating c)›
(I was not sure about the types of your s and (c,s) values, so I just used 's for that.)
Obviously, the witness construction would fail if at some point SOME cannot pick a value satisfying the constraint. So, one still has to prove that the non-termination indeed propagates (as is quite obvious from the definition):
lemma terminating_suc:
assumes ‹¬ terminating c›
obtains c' where ‹c → c'› ‹¬ terminating c'›
using assms terminating.intros by blast
lemma nontermination_implies_infinite_trace:
assumes ‹¬ terminating c0›
shows ‹¬ terminating (infinite_trace c0 n)
∧ infinite_trace c0 n → infinite_trace c0 (Suc n)›
by (induct n,
(simp, metis (mono_tags, lifting) terminating_suc assms exE_some)+)
Proving your existential quantification using infinite_trace (c,s) as a witness is straightforward.
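For example, a sketch of that last step, relying only on the lemmas above (the exact automation may need minor adjustment):
lemma
  assumes ‹¬ terminating (c, s)›
  shows ‹∃cfs. cfs 0 = (c, s) ∧ (∀n. cfs n → cfs (n + 1))›
proof (intro exI[where x = ‹infinite_trace (c, s)›] conjI allI)
  (* the trace starts at (c, s) by the first infinite_trace equation *)
  show ‹infinite_trace (c, s) 0 = (c, s)› by simp
next
  (* each step of the trace is justified by the propagation lemma *)
  fix n
  show ‹infinite_trace (c, s) n → infinite_trace (c, s) (n + 1)›
    using nontermination_implies_infinite_trace[OF assms, of n]
    by (simp add: Suc_eq_plus1)
qed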

Agda Programming- Proving Insertionsort makes 3 or less comparisons on a list of size 3

Good Evening Fellows,
I am attempting to prove that insertionsort will perform <= 3 comparisons when sorting a list of size 3. This is the last part of my project and I cannot make any headway on it. After spending a fair amount of time pursuing an incorrect approach, my instructor informed me it may be accomplished by writing a helper function. Unfortunately I have not come up with any code to help. If anyone can offer advice or assistance, any and all are appreciated. Code follows. Thanks!
insert : ℕ → 𝕃 ℕ → 𝕃 ℕ × ℕ
insert x (h :: t) = if h < x then (x :: h :: t , 1) else let r = insert x t in h :: (fst r) , 1 + snd r
insert x [] = x :: [] , 0
insertionsort : 𝕃 ℕ → 𝕃 ℕ × ℕ
insertionsort [] = [] , 0
insertionsort (h :: t) with insertionsort t
insertionsort (h :: t) | t' , c1 with insert h t'
insertionsort (h :: t) | t' , c1 | r , c2 = r , c1 + c2
exampleThm : ∀(x y z c : ℕ)(r : 𝕃 ℕ) → insertionsort (x :: y :: z :: []) ≡ r , c → c ≤ 3 ≡ tt
exampleThm x y z = ?
All the comparisons made in the course of insertionsort are actually done in subordinate calls to insert, so it may help to establish a useful fact about the comparison cost of insert. If you can bound the cost of each call to insert, you should be able to combine those bounded partial costs into a bounded total cost. In case your instructor is concerned that I am helping too much, let me summarize by saying only this: the structure of the proof has to follow the structure of the program.
A general pattern when constructing proofs is to generalize them to make them easier. In this case I believe it is clearer to prove a general bound on the number of comparisons insertion sort will make and then instantiate it to your particular input.
The structure of your proof will follow the structure of your program.
First we'll need to characterize the behavior of insert, since insertion sort is implemented in terms of it.
insert-bound : ∀ x ys → proj₂ (insert x ys) ≤ length ys
Then we'll use that to characterize the behavior of insertion sort
bound : ℕ → ℕ
bound 0 = 0
bound (suc n) = bound n + n
insertionsort-bound : ∀ xs → proj₂ (insertionsort xs) ≤ bound (length xs)
Using the general solution we can solve the specific case of a three-element list: bound 3 computes to ((0 + 0) + 1) + 2 = 3, so the general bound instantiates to exactly c ≤ 3.
exampleThm : ∀ x y z c r → insertionsort (x ∷ y ∷ z ∷ []) ≡ (r , c) → c ≤ 3
exampleThm x y z ._ ._ refl = insertionsort-bound (x ∷ y ∷ z ∷ [])
Here's an implementation of your problem against the Agda standard library:
http://www.galois.com/~emertens/insertionsort-agda/Insertionsort.html
