Isabelle: Predecessor function

I am not sure, but I think sometimes my proofs would be easier if I had a predecessor function, e.g., in case a variable is known not to be zero.
I don't know a good example, but perhaps here: { fix n have "(n::nat) > 0 ⟹ (∑i<n. f i) = Predecessor n" sorry }
Possibly because it is not a good idea, there is no predecessor function in the library.
Is there a way to simulate a predecessor function or similar?
I have thought of this example:
theorem dummy:
  shows "1 = 1" (* dummy *)
proof -
  (* Predecessor function *)
  def pred == "λnum::nat. (∑i∈{i. Suc i = num}. i)"
  { fix n :: nat
    from pred_def have "n > 0 ⟹ Suc (pred n) = n"
      apply (induct n)
      by simp_all
  }
  show ?thesis sorry
qed

Your definition is unnecessarily complicated. Why do you not just write
def pred ≡ "λn::nat. n - 1"
Then you can have
have [simp]: "⋀n. n > 0 ⟹ Suc (pred n) = n" by (simp add: pred_def)
In the case of 0, the pred function then simply returns 0, and Suc (pred 0) = 0 obviously doesn't hold. You could also define pred ≡ "λn. THE n'. Suc n' = n". That would return the unique natural number whose successor is n if such a number exists (i.e. if n > 0), and is undefined (i.e. some natural number you know nothing about) otherwise. However, I would argue that in this case, it is much easier and more sensible to just use pred ≡ λn::nat. n - 1.
I would suspect that in most cases, you can simply forgo the pred function and write n - 1; however, I do know that it is sometimes good to have the - 1 “protected” by a definition. In these cases, I usually def a variable n' as n - 1 and prove Suc n' = n – basically the same thing. In my opinion, seeing as proving this takes only one line, it does not really merit a definition of its own, such as this pred function, but one could make a reasonable case for it, I guess.
Another thing: I've noticed you use lemma "1 = 1" as some kind of dummy environment to do Isar proofs in. I would like to point out the existence of notepad, which exists precisely for that use case and can be used as follows:
notepad
begin
  have "some fact" by something
end
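Combining the two points above, the "def a variable n' as n - 1" pattern looks like this inside a notepad (a sketch using the current define command rather than the older def syntax; Suc_pred finishes the one-line proof):
notepad
begin
  fix n :: nat
  assume npos: "n > 0"
  define n' where "n' = n - 1"
  (* the one-line proof mentioned above; Suc (n - 1) = n needs n > 0 *)
  from npos have "Suc n' = n" by (simp add: n'_def)
end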

Related

Two Coq Problems SOS: one is about IndProp, another has something to do with recursion, I guess

(* 4. Let oddn and evenn be the predicates that test whether a given number
   is odd or even. Show that the sum of an odd number with an even number is odd. *)
Inductive oddn : nat -> Prop :=
| odd1 : oddn 1
| odd2 : forall n, oddn n -> oddn (S (S n)).

Inductive evenn : nat -> Prop :=
| even1 : evenn 0
| even2 : forall n, evenn n -> evenn (S (S n)).

Theorem odd_add : forall n m, oddn n -> evenn m -> oddn (n + m).
Proof. intros. destruct m.
  + Search add. rewrite <- plus_n_O. apply H.
  + destruct H.
    ++ simpl. apply odd2.
I don't know how I can prove this theorem, since I cannot link oddn with evenn.
(* 6. We call a natural number good if the sum of all
its digits is divisible by 5. For example 122 is good
but 93 is not. Define a function count such that
(count n) returns the number of good numbers smaller than
or equal to n. Here we assume that 0 <= n < 10000.
Hint: You may find the "let ... in" struct useful. You may
directly use the div and modulo functions defined in the
standard library of Coq. *)
Definition isGood (n : nat) : bool :=

Fixpoint count (n : nat) : nat :=
  match n with
  | 0 => 1
  | S n' => if isGood n then 1 + count n'
            else count n'
  end.

Compute count 15.

Example count_test1 : count 15 = 3.
Proof. reflexivity. Qed.

Example count_test2 : count 2005 = 401.
Proof. reflexivity. Qed.
For the second problem, I got stuck because the recursion I defined won't be accepted by Coq (non-decreasing?).
I am stuck on these two problems; can anyone work them out?
If you want to define oddn and evenn independently, you may prove a lemma which relates these two predicates, like:
Remark R : forall n, (evenn n <-> oddn (S n)) /\
(oddn n <-> evenn (S n)).
(* proof by induction on n *)
Then, it's easy to apply this remark for solving your first exercise.
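For instance, the first exercise can then be finished by induction on the evidence for oddn n, using the left-to-right direction of R in the base case (a sketch, assuming R has been proved):
Theorem odd_add : forall n m, oddn n -> evenn m -> oddn (n + m).
Proof.
  intros n m Hn Hm.
  induction Hn as [| n' Hn' IH].
  - (* n = 1: the goal is oddn (S m), which is evenn m transported by R *)
    destruct (R m) as [[HR _] _].
    simpl. apply HR. assumption.
  - (* n = S (S n'): peel off two successors and use the induction hypothesis *)
    simpl. apply odd2. exact IH.
Qed.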
Please note that you may define evenn and oddn in several other ways:
as mutually inductive predicates
with existential quantifiers
define evenn first, then define oddn in terms of evenn
...
I don't understand the problem with the second exercise.
A few days ago, we discussed a function sum_digits that you can use (with modulo) to define isGood.
Your function count looks OK, but it is quite inefficient (with Peano natural numbers).
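For illustration, one possible shape for these definitions under the exercise's assumption 0 <= n < 10000 (this sum_digits is my own sketch, not necessarily the one from that discussion):
(* extract the four decimal digits with div and mod, as the hint suggests *)
Definition sum_digits (n : nat) : nat :=
  (n / 1000) mod 10 + (n / 100) mod 10 + (n / 10) mod 10 + n mod 10.
Definition isGood (n : nat) : bool :=
  Nat.eqb (sum_digits n mod 5) 0.
With isGood filled in, the Fixpoint count from the question is structurally decreasing on n and should be accepted as written.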

Why is this simple proof with real numbers not proven by auto (or even sledgehammer)

What am I doing wrong or forgetting, such that this can't be proven automatically by Isabelle?
lemma "sqrt 5 = (1 + sqrt 5) / 2 - ((1 - sqrt 5) / 2)^2"
  apply auto
  sorry
This is what I actually want to prove:
fun fib :: "nat ⇒ nat" where
  "fib (Suc 0) = Suc 0"
| "fib (Suc (Suc 0)) = Suc 0"
| "fib (Suc n) = (fib (n - 1)) + (fib (n - 2))"

lemma "(n::nat) > 0 ⟹ fib n = (1 / sqrt 5) * (
    (((1 + sqrt 5) / 2)^n) -
    (((1 - sqrt 5) / 2)^n)
  )"
proof (induction n)
  case 0
  then show ?case by auto
next
  case (Suc n)
  then show ?case
  proof (induction n)
    case 0
    then show ?case
      apply (auto)
      sorry
  next
    case (Suc n)
    then show ?case
      apply auto
      sorry
  qed
qed
Sledgehammer usually isn't good at doing heavy arithmetic rewriting, which is required here. Proof methods that use the simplifier, like auto and simp, are good at this. But you have to give them the right rules.
There are a number of theorem collections for this:
algebra_simps, which normalises a term w.r.t. associativity, commutativity, and distributivity (essentially sorting everything in ascending order and multiplying out)
field_simps, which additionally cross-multiplies fractions. But caution: the denominator must be provably non-zero. If the simplifier cannot show that, it will not cross-multiply. If the denominator is a product of several factors, it may multiply out the product first and then the denominator is so ugly that it cannot show non-zeroness anymore.
divide_simps is like field_simps except that it does not use distributivity, and it always cross-multiplies by introducing a case distinction for the case that the denominator is zero. (In Isabelle/HOL, x / 0 = 0 holds by definition.)
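For example, the x / 0 = 0 convention mentioned above is itself directly provable:
lemma "(x::real) / 0 = 0"
  by simp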
Another rule you will need here is power2_eq_square to rewrite the square into a multiplication. Then you get:
lemma "sqrt 5 = (1 + sqrt 5) / 2 - ((1 - sqrt 5) / 2)^2"
apply (auto simp: field_simps power2_eq_square)
proof (prove)
goal (1 subgoal):
1. False
So something is not quite right here. As NieDzejkob pointed out in the comment, the left-hand side should be sqrt 5 - 1. With that, the proof goes through.
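With the corrected left-hand side, the same invocation closes the goal (a sketch restating the fixed lemma):
lemma "sqrt 5 - 1 = (1 + sqrt 5) / 2 - ((1 - sqrt 5) / 2)^2"
  by (auto simp: field_simps power2_eq_square)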
By the way, in case you're not aware, Fibonacci numbers are in the Isabelle/HOL standard library: https://isabelle.in.tum.de/library/HOL/HOL-Number_Theory/Fib.html

How to Prove Commutative Property of Maximum in Isabelle

I'm extremely new to Isabelle so please have mercy. How can I prove the commutative property of maximum with this function?
fun max :: "nat => nat => nat" where
  "max 0 0 = 0" |
  "max (Suc x) 0 = Suc x" |
  "max 0 (Suc x) = Suc x" |
  "max (Suc x) (Suc y) = Suc (max x y)"

lemma "max x y = max y x"
? ? ?
I know that it can be easily proven for
definition max :: "nat ⇒ nat ⇒ nat" where
  "max x y = (if x ≥ y then x else y)"

lemma "max x y = max y x"
  apply (simp add: max_def)
  done
This is not a homework assignment. I'm genuinely curious and would love to understand as much about Isabelle and mathematical proof as possible. Thanks for your time.
The typical way to prove some fact about a recursively-defined function is by induction, where the structure of the induction follows the structure of the recursive definition.
In Isabelle, you can do induction with the induct method. If you write induct n for a natural number n, you will get two cases: the case where n = 0 and the case where n is the successor of something.
In this case, you should, however, use the induction rule provided for max by the function package, which is called max.induct. So, just do apply (induction x y rule: max.induct) on your goal and see what you are left with afterwards. This is sufficient for what you want to prove.
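Concretely, the whole proof can be a one-liner (a sketch, assuming the fun max from the question is in scope; the first three cases are trivial, and the last one uses the induction hypothesis under Suc):
lemma "max x y = max y x"
  by (induction x y rule: max.induct) simp_all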
However, you already mentioned the alternative definition if x ≥ y then x else y. Some proofs (like associativity of max) are probably easier with that definition. In such cases, you can simply prove this alternative definition as
lemma max_altdef: "max x y = (if x ≥ y then x else y)"
and then use whichever definition is more convenient for you in every situation. The proof of max_altdef is also a simple induction.
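For example (again a sketch; in the Suc/Suc case the simplifier only needs Suc x ≥ Suc y ⟷ x ≥ y):
lemma max_altdef: "max x y = (if x ≥ y then x else y)"
  by (induction x y rule: max.induct) simp_all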

Free type variables in proof by induction

While trying to prove lemmas about functions in continuation-passing style by induction I have come across a problem with free type variables. In my induction hypothesis, the continuation is a schematic variable but its type involves a free type variable. As a result Isabelle is not able to unify the type variable with a concrete type when I try to apply the i.h. I have cooked up this minimal example:
fun add_k :: "nat ⇒ nat ⇒ (nat ⇒ 'a) ⇒ 'a" where
  "add_k 0 m k = k m" |
  "add_k (Suc n) m k = add_k n m (λn'. k (Suc n'))"

lemma add_k_cps: "∀k. add_k n m k = k (add_k n m id)"
proof (rule, induction n)
  case 0 show ?case by simp
next
  case (Suc n)
  have "add_k (Suc n) m k = add_k n m (λn'. k (Suc n'))" by simp
  also have "… = k (Suc (add_k n m id))"
    using Suc[where k="(λn'. k (Suc n'))"] by metis
  also have "… = k (add_k n m (λn'. Suc n'))"
    using Suc[where k="(λn'. Suc n')"] sorry (* Type unification failed *)
  also have "… = k (add_k (Suc n) m id)" by simp
  finally show ?case .
qed
In the "sorry step", the explicit instantiation of the schematic variable ?k fails with
Type unification failed
Failed to meet type constraint:
Term: Suc :: nat ⇒ nat
Type: nat ⇒ 'a
since 'a is free and not schematic. Without the instantiation the simplifier fails anyway and I couldn't find other methods that would work.
Since I cannot quantify over types, I don't see any way to make 'a schematic inside the proof. When a term variable becomes schematic locally inside a proof, why isn't this the case for the variables in its type too? After the lemma has been proved, they become schematic at the theory level anyway. This seems quite limiting. Could an option to do this be implemented in the future, or is there some inherent limitation? Alternatively, is there an approach that avoids this problem while still keeping the continuation schematically polymorphic in the proven lemma?
Free type variables become schematic in a theorem when the theorem is exported from the block in which the type variables have been fixed. In particular, you cannot quantify over type variables in a block and then instantiate the type variable within the block, as you are trying to do in your induction. Arbitrary quantification over types leads to inconsistencies in HOL, so there is little hope that this could be changed.
Fortunately, there is a way to prove your lemma in CPS style without type quantification. The problem is that your statement is not general enough, because it contains id. If you generalise it, then the proof works:
lemma add_k_cps: "add_k n m (k ∘ f) = k (add_k n m f)"
proof (induction n arbitrary: f)
  case 0 show ?case by simp
next
  case (Suc n)
  have "add_k (Suc n) m (k ∘ f) = add_k n m (k ∘ (λn'. f (Suc n')))"
    by (simp add: o_def)
  also have "… = k (add_k n m (λn'. f (Suc n')))"
    using Suc.IH[where f="(λn'. f (Suc n'))"] by metis
  also have "… = k (add_k (Suc n) m f)" by simp
  finally show ?case .
qed
You get your original theorem back if you choose f = id.
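For instance (a sketch; the name add_k_cps_id is mine, and k ∘ id collapses to k in the simplifier):
corollary add_k_cps_id: "add_k n m k = k (add_k n m id)"
  using add_k_cps[where f = id] by simp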
This is an inherent limitation of how induction works in HOL. Induction is a rule in HOL, so it is not possible to generalize any types in the induction hypothesis.
A specialized solution for your problem is to first prove
lemma add_k_cps_nat: "add_k n m k = k (n + m)"
  by (induction n arbitrary: m k) auto
and then prove add_k_cps.
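The second step is then a one-liner, since the simplifier unfolds id and both sides collapse to k (n + m) (a sketch):
lemma add_k_cps: "∀k. add_k n m k = k (add_k n m id)"
  by (simp add: add_k_cps_nat)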
A general approach is: first prove instances for fixed types, for which the induction works (in this example, an induction over nat), and then derive from these a statement that is generalized in the type itself.

Ignoring a case to prove a goal through elimination

I have the following lemma to show the derivative of f at x is D.
lemma lm1:
  assumes "(∀h. (f (x + h) - f x) = D*h)"
  shows "DERIV f x :> D"
proof cases
  assume notzero: "∀h. h ≠ 0"
  have cs1: "(λh. (f (x + h) - f x) / h) -- 0 --> D" using assms notzero by auto
  from this DERIV_def show ?thesis by auto
From the assumptions, I can easily prove the lemma by taking the limit and then using DERIV_def. For this I have to assume that h ≠ 0. Continuing with the proof by cases, I have to show that the goal is also true when h = 0; however, this can't be done when h = 0, as the assumption becomes 0 = 0 and the lemma becomes trivial.
Is there a way I can prove the goal, which in this case is that f has derivative D at x, without the additional assumption that h ≠ 0?
edit: After further research, I came across the use of elimination rules in Isabelle, which may be helpful. Also, I understand that the lemma is correct, since if the function is continuous, then the derivative at 0 also exists.
I have been searching for the correct use and implementation of the above information. How can I improve my search, and where should I be looking?
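For what it's worth, one way to sidestep the case distinction entirely: the assumption forces f to be an affine function, and for affine functions the standard derivative rules apply. A sketch along these lines (derivative_eq_intros is the usual rule collection for such goals; this is an illustration, not necessarily the elimination-rule approach you were searching for):
lemma lm1:
  assumes "∀h. f (x + h) - f x = D * h"
  shows "DERIV f x :> D"
proof -
  (* the assumption pins f down to an affine function *)
  have f_eq: "f = (λy. D * (y - x) + f x)"
  proof (rule ext)
    fix y
    from assms have "f (x + (y - x)) - f x = D * (y - x)" by blast
    then have "f y - f x = D * (y - x)" by simp
    then show "f y = D * (y - x) + f x" by linarith
  qed
  (* affine functions have the expected derivative everywhere *)
  have "DERIV (λy. D * (y - x) + f x) x :> D"
    by (auto intro!: derivative_eq_intros)
  with f_eq show ?thesis by simp
qed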
