PLCC book, page 23: is it a misprint, and should sigma be replaced by sigma prime? - verifiable-c

In the Program Logics for Certified Compilers (PLCC) book, on page 23, in the expression:
(v ≠ 0 ∧ ∃σ' ∃h∃t. σ = h · σ' ∧ v.head->h ∗ v.next->t ∗ listrep σ (t, 0))
It seems to me that, since σ represents the whole list v and σ' represents its tail, the last conjunct should be listrep σ' (t, 0). Is that correct, and is it just a misprint in the book?

Yes, you are right; it should be sigma-prime.
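For reference, the corrected clause would then read:
(v ≠ 0 ∧ ∃σ' ∃h∃t. σ = h · σ' ∧ v.head->h ∗ v.next->t ∗ listrep σ' (t, 0))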

Related

Reification for interpretation functions which use different interpretation functions below quantifiers/lambdas

I'm currently trying to use Isabelle/HOL's reification tactic. I'm unable to use different interpretation functions below quantifiers/lambdas. The MWE below illustrates this. The important part is the definition of the form function, where the call to ter occurs below the ∀. When trying to use the reify tactic I get a "Cannot find the atoms equation" error. I don't get this error for interpretation functions which only call themselves under quantifiers.
I can't really reformulate my problem to avoid this. Does anybody know how to get reify working for such cases?
theory MWE
  imports "HOL-Library.Reflection"
begin

datatype Ter = V nat | P Ter Ter
datatype Form = All0 Ter

fun ter :: "Ter ⇒ nat list ⇒ nat"
  where "ter (V n) vs = vs ! n"
      | "ter (P t1 t2) vs = ter t1 vs + ter t2 vs"

fun form :: "Form ⇒ nat list ⇒ bool"
  where "form (All0 t) vs = (∀v. ter t (v # vs) = 0)"  (* use of different interpretation function below quantifier *)

(*
  I would expect this to reify to:
    form (All0 (P (V 0) (V 0))) []
  instead I get an error :-(
*)
lemma "∀n :: nat. n + n = 0"
  apply (reify ter.simps form.simps)
  (* proof (prove)
     goal (1 subgoal):
      1. ∀n. n + n = n + n
     Cannot find the atoms equation *)
  oops

(* As a side note: the following example in src/HOL/ex/Reflection_Examples.thy (line 448, Isabelle2022) seems to be broken?
   For me, the reify invocation doesn't change the goal at all. It uses quantifiers too, but only calls the same
   interpretation function under quantifiers and also doesn't throw an error, so at least for me this seems to be
   unrelated to my problem.
*)
(*
lemma " ∀x. ∃n. ((Suc n) * length (([(3::int) * x + f t * y - 9 + (- z)] # []) # xs) = length xs) ∧ m < 5*n - length (xs # [2,3,4,x*z + 8 - y]) ⟶ (∃p. ∀q. p ∧ q ⟶ r)"
  apply (reify Irifm.simps Irnat_simps Irlist.simps Irint_simps)
  oops
*)

end

How to proceed in Isabelle when the goal has implications and existentials?

I'm trying to write a proof in the Isabelle "structured style" and I'm not sure how to specify the value of existential variables. Specifically, I'm trying to expand the sorrys in this proof:
lemma division_theorem: "lt Zero n ⟹ ∃ q r. lt r n ∧ m = add (mul q n) r"
proof (induct m)
  case Zero
  then show ?case
    by (metis add_zero_right mul.simps(1))
next
  case (Suc m)
  then show ?case
  proof (cases "Suc r = n")
    case True
    then show ?thesis sorry
  next
    case False
    then show ?thesis sorry
  qed
qed
Zero, add, and mul are defined on a nat-like class that I made just for the purposes of writing simple number theory proofs; hopefully that is intuitive. I have done this in the "apply" style, so I'm familiar with how the proof is supposed to go; I'm just not understanding how to turn it into the "structured" style.
So the goals generated by these cases are:
1. (lt Zero n ⟹ ∃q r. lt r n ∧ m = add (mul q n) r) ⟹
lt Zero n ⟹ cnat.Suc r = n ⟹ ∃q r. lt r n ∧ cnat.Suc m = add (mul q n) r
2. (lt Zero n ⟹ ∃q r. lt r n ∧ m = add (mul q n) r) ⟹
lt Zero n ⟹ cnat.Suc r ≠ n ⟹ ∃q r. lt r n ∧ cnat.Suc m = add (mul q n) r
At a high level, for that first goal, I want to grab the q and r from the first existential, specify q' = Suc q and r' = Zero for the second existential, and let sledgehammer bash out precisely what mix of arithmetic lemmas to use to prove that it works. And then do the same with q' = q and r' = Suc r for the second case.
How can I do this? I have tried various mixes of obtain and rule exI, but I feel like I'm not understanding some basic mechanism here. Using the apply style this works when I use subgoal_tac, but it seems unlikely that that is the ideal method of solution here.
As you can see in the two goals generated by the command cases "Suc r = n", the occurrences of the variable r in the expressions cnat.Suc r = n and cnat.Suc r ≠ n are actually free and thus not related to the existentially quantified formula whatsoever. In order to "grab" the q and r from the induction hypothesis, you need to use the obtain command. As a side remark, I suggest using the induction method instead of the induct method, so you can refer to the induction hypothesis as Suc.IH instead of Suc.hyps. Once you "grab" q and r from the induction hypothesis, you just need to prove that
lt r' n, and that
Suc m = add (mul q' n) r'
with q' and r' as defined for each of your two cases. Here is a (slightly incomplete) proof of your division theorem:
lemma division_theorem: "lt Zero n ⟹ ∃ q r. lt r n ∧ m = add (mul q n) r"
proof (induction m)
  case Zero
  then show ?case
    by (metis add_zero_right mul.simps(1))
next
  case (Suc m)
  (* "Grab" q and r from IH *)
  from ‹lt Zero n› and Suc.IH obtain q and r where "lt r n ∧ m = add (mul q n) r"
    by blast
  show ?case
  proof (cases "Suc r = n")
    case True
    (* In this case, we use q' = Suc q and r' = Zero as witnesses *)
    from ‹Suc r = n› and ‹lt r n ∧ m = add (mul q n) r› have "Suc m = add (mul (Suc q) n) Zero"
      using add_comm by auto
    with ‹lt Zero n› show ?thesis
      by blast
  next
    case False
    (* In this case, we use q' = q and r' = Suc r as witnesses *)
    from ‹lt r n ∧ m = add (mul q n) r› have "Suc m = add (mul q n) (Suc r)"
      by simp
    moreover have "lt (Suc r) n"
      sorry (* left as exercise :) *)
    ultimately show ?thesis
      by blast
  qed
qed
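As a side note on the original question of how to specify the values of existential variables: besides letting blast find the witnesses from an instantiated fact (as above), you can also introduce the existentials explicitly with exI and then state the instantiated goal. Here is a minimal, self-contained sketch on plain nat rather than the custom cnat type from the question; the idiom is my suggestion, not part of the answer above:
lemma "∃q r::nat. q + r = 3"
proof (intro exI)
  (* after intro exI the goal is ?q + ?r = 3; stating a concrete instance fixes the witnesses q = 2, r = 1 *)
  show "(2::nat) + 1 = 3" by simp
qed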

Why do I get this exception on an induction rule for a lemma?

I am trying to prove the following lemma (which is the meaning formula for the addition of two binary numerals). It goes like this:
lemma (in th2) addMeaningF_2: "∀m. m ≤ n ⟹ (m = (len x + len y) ⟹ (evalBinNum_1 (addBinNum x y) = plus (evalBinNum_1 x) (evalBinNum_1 y)))"
I am trying to perform strong induction. When I apply (induction n rule: less_induct) to the lemma, it throws an error.
exception THM 0 raised (line 755 of "drule.ML"):
infer_instantiate_types: type ?'a of variable ?a
cannot be unified with type 'b of term n
(⋀x. (⋀y. y < x ⟹ ?P y) ⟹ ?P x) ⟹ ?P ?a
Can anyone explain this?
Edit:
For more context
locale th2 = th1 +
  fixes plus :: "'a ⇒ 'a ⇒ 'a"
  assumes arith_1: "plus n zero = n"
    and plus_suc: "plus n (suc m) = suc (plus n m)"
len and evalBinNum_1 are both recursive functions: len gives us the length of a given binary numeral, while evalBinNum_1 evaluates binary numerals.
fun (in th2) evalBinNum_1 :: "BinNum ⇒ 'a"
  where
    "evalBinNum_1 Zero = zero"
  | "evalBinNum_1 One = suc (zero)"
  | "evalBinNum_1 (JoinZero x) = plus (evalBinNum_1 x) (evalBinNum_1 x)"
  | "evalBinNum_1 (JoinOne x) = plus (plus (evalBinNum_1 x) (evalBinNum_1 x)) (suc zero)"
The problem is that Isabelle cannot infer the type of n (or the bound occurrence of m) when trying to use the induction rule less_induct. You might want to add a type annotation such as (n::nat) in your lemma. For the sake of generality, you might want to state that the type of n is an instance of the class wellorder, that is, (n::'a::wellorder). On another subject, I think there is a logical issue with your lemma statement: I guess you actually mean ∀m. m ≤ (n::nat) ⟶ ... ⟶ ... or, equivalently, ⋀m. m ≤ (n::nat) ⟹ ... ⟹ .... Finally, it would be good to know the context of your problem (e.g., there seems to be a locale th2 involved) for a more precise answer.
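For instance, a minimal sketch combining both suggestions (the type annotation on n and the switch from ⟹ to ⟶ under the quantifier) might look as follows; addBinNum, len and evalBinNum_1 are taken from the question, and the actual induction step is left open:
lemma (in th2) addMeaningF_2:
  "∀m. m ≤ (n::nat) ⟶ m = len x + len y ⟶
       evalBinNum_1 (addBinNum x y) = plus (evalBinNum_1 x) (evalBinNum_1 y)"
  apply (induction n rule: less_induct)
  (* with the type annotation, less_induct can be instantiated and the THM exception goes away;
     the induction step itself still has to be proved *)
  oops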

Why can't we do case analysis on inductively defined predicates *directly* when doing rule inversion?

There seems to be something about inductive predicates I don't understand, since I keep getting issues with them. My most recent struggle is to understand case analysis with inductively defined predicates, using ev from chapter 5 of the Concrete Semantics book.
Assume I am proving
lemma
shows "ev n ⟹ ev (n - 2)"
I've tried to start the proof immediately in Isabelle/HOL, but it either complains or gives weird goals:
lemma
shows "ev n ⟹ ev (n - 2)"
proof (cases)
which shows:
proof (state)
goal (2 subgoals):
1. ⟦ev n; ?P⟧ ⟹ ev (n - 2)
2. ⟦ev n; ¬ ?P⟧ ⟹ ev (n - 2)
which is not what I expected.
When I pass n to cases, we instead get case analysis on the definition of natural numbers (note that other times it does the case analysis on ev correctly; see the later example):
lemma
shows "ev n ⟹ ev (n - 2)"
proof (cases n)
gives:
proof (state)
goal (2 subgoals):
1. ⟦ev n; n = 0⟧ ⟹ ev (n - 2)
2. ⋀nat. ⟦ev n; n = Suc nat⟧ ⟹ ev (n - 2)
which is not what I expected. Note, however, that the following DOES work (i.e. it does case analysis on ev, not on natural numbers), even with n as a parameter:
lemma
shows "ev n ⟹ ev (n - 2)"
proof -
assume 0: "ev n"
from this show "ev (n - 2)"
proof (cases n)
I realize that there must be some magic about assuming ev n first and then stating to show ev (n - 2); otherwise the error wouldn't occur.
I understand the idea of rule inversion (arriving at a given fact to be proven in reverse, by analysing the cases that could have led to it). For the "even" predicate the rule inversion is:
ev n ==> n = 0 ∨ (∃k. n = Suc (Suc k) ∧ ev k)
which makes sense based on the inductively defined predicate:
inductive ev :: "nat ⇒ bool" where
ev0: "ev 0" |
evSS: "ev n ⟹ ev (Suc (Suc n))"
but I don't understand why directly doing the cases wouldn't work, or why this syntax is invalid:
proof (cases ev)
or this:
proof (cases ev.case)
etc.
I think the crux is that, at heart, I don't know, when dealing with inductively defined predicates, whether the case analysis is applied to the goal or to the assumption, but from the wording of the textbook:
The name rule inversion emphasizes that we are reasoning backwards: by which rules could some given fact have been proved?
I'd assume it's applying rule inversion to the goals since it says "by which rules could some given fact have been proved".
In addition, this example ev n ==> ev (n - 2) from the book does not help, because both the premise and the conclusion involve ev.
How does case analysis with rule inversion really work, and why do we need to assume things first for Isabelle to give sensible goals for the case analysis?
Not sure I understand the entire question but this:
lemma
shows "ev n ⟹ ev (n - 2)"
proof (cases)
gives you:
proof (state)
goal (2 subgoals):
1. ⟦ev n; ?P⟧ ⟹ ev (n - 2)
2. ⟦ev n; ¬ ?P⟧ ⟹ ev (n - 2)
because Isabelle falls back to a plain case split on the truth of an arbitrary proposition ?P, i.e. either true or false. I believe the syntax you are looking for is:
proof (cases rule: ev.cases)
Which is how you tell Isabelle explicitly what rule it should use for a proof by cases.
The way to do it is as Ben Sheffield's answer said:
proof (cases rule: ev.cases)
I also noticed that:
apply (rule ev.cases)
works, but I think it would be helpful to go through a small example to see the cases explicitly outlined:
Consider:
lemma "ev n ⟹ ev (n - 2)"
first inspect its cases theorem:
thm ev.cases
⟦ev ?a; ?a = 0 ⟹ ?P; ⋀n. ⟦?a = Suc (Suc n); ev n⟧ ⟹ ?P⟧ ⟹ ?P
Then it unifies with the goal and introduces the new goals with the original assumptions and all the assumptions for the cases. That is why there is an ev n in all of them.
apply (rule ev.cases)
has goals:
proof (prove)
goal (3 subgoals):
1. ev n ⟹ ev ?a
2. ⟦ev n; ?a = 0⟧ ⟹ ev (n - 2)
3. ⋀na. ⟦ev n; ?a = Suc (Suc na); ev na⟧ ⟹ ev (n - 2)
and you can do simp and proceed with the proof as normal.
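For completeness, here is a structured Isar version of the same case analysis, roughly following the Concrete Semantics idiom; this is a sketch, and the one-line case proofs are my guesses and may need adjusting:
lemma "ev n ⟹ ev (n - 2)"
proof -
  assume "ev n"
  then show "ev (n - 2)"
  proof (cases rule: ev.cases)  (* with the chained fact ‹ev n›, plain ‹cases› selects ev.cases as well *)
    case ev0
    then show ?thesis by (simp add: ev.ev0)  (* n = 0, so n - 2 = 0 *)
  next
    case (evSS k)
    then show ?thesis by simp                (* n = Suc (Suc k) and ev k, so n - 2 = k *)
  qed
qed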

Proving topology statement in Isabelle

I have been working with limits and topology in Isabelle and I want to prove the following lemma:
lemma fixes f g :: "real ⇒ real"
assumes
"open S"
"∀a b. a < b <--> f a < f b"
"∀a. (f a)>0"
"continuous_on UNIV (f)"
"∀w∈S. ∀h. (w+h)∈S --> h * (f w) ≤ g (w+h) - g w"
shows "∀w∈S. eventually (λh. f w ≤ (g (w + h) - g w)/h) (at 0)"
using assms unfolding eventually_at
apply (auto simp: divide_simps mult_ac)
I have managed to prove it for two different scenarios:
Here, all instances of h in the inequalities are replaced by |h|. A solution is found almost instantly.
lemma
fixes f g :: "real ⇒ real"
assumes "open S" "∀w∈S. ∀h. (w+h)∈S --> abs(h) * (f w) ≤ g (w+abs(h)) - g w"
shows "∀w∈S. eventually (λh. f w ≤ (g (w + abs(h)) - g w)/abs(h)) (at 0)"
using assms unfolding eventually_at
apply (simp add: divide_simps mult_ac)
by (metis (no_types, hide_lams) add.commute diff_0 diff_add_cancel
diff_minus_eq_add dist_norm open_real_def)
In another scenario, instead of having a set S, I use the set of all real numbers (UNIV), and after (simp add: ) I am left with only one case to prove, for which sledgehammer finds a solution.
lemma compuniv:
fixes f g :: "real ⇒ real"
assumes "S=UNIV" "open S"
"∀w∈S. ∀h. (w+h)∈S --> h * (f w) ≤ g (w+h) - g w"
shows "∀w∈S. eventually (λh. f w ≤ (g (w + h) - g w)/h) (at 0)"
using assms unfolding eventually_at
apply (simp add: divide_simps mult_ac)
Specifically, I am struggling to understand why, when S=UNIV, a solution can be found. Even a method to reduce the problem to proving one sub-case (as when S=UNIV) would help greatly. How can I extend the proofs of the above two cases to prove the main problem?
The bigger picture
This result forms the foundation for proving a result using the real_tendsto_sandwich theorem.
lemma
fixes f g :: "real ⇒ real"
assumes
"open S"
"∀a b. a < b <-> f a < f b"
"∀a. f a > 0"
"continuous_on S (f)"
"∀w∈S. (λh. f (w+h)) -- 0 --> f w"
"∀w∈S. (λh. f w) -- 0 --> f w"
"∀w∈S. eventually (λh. (h ≥ 0 --> f (w+h) ≥ (g (w + h) - g w)/h) ∧
(h ≤ 0 --> f (w+h) ≤ (g (w + h) - g w)/h)) (at 0)"
"∀w∈S. eventually (λh. (h ≥ 0 --> f w ≤ (g (w + h) - g w)/h) ∧
(h ≤ 0 --> f w ≥ (g (w + h) - g w)/h)) (at 0)"
shows "∀w∈S. ((λh. (g (w+h) - g w)/h) ---> f w) (at 0)"
using assms real_tendsto_sandwich
From the assumptions, it is clear that (g (w + h) - g w)/h is bounded by f (w+h) and f w when h ≥ 0 and h ≤ 0; therefore, taking the limit as h --> 0 yields the result (g (w + h) - g w)/h --> f w in both cases. Therefore, mathematically, the final result would be the same. The difficulty is: how can I combine the results for h ≥ 0 and h ≤ 0 to prove the final result?
(Update: I was wrong in my informal explanation, but I think I fixed it. I added some opinions of mine, but I put them at the end, since you didn't ask for them.)
(I assume your use of <--> is a mistake, and it should be <->.)
In all this, I'm working with intuitive ideas of what I think the math means in Topological_Spaces.thy. It's good that you're working on some calculus; this gives me a little hope.
(General complaining: the level of formalism in the THY is fairly high; it doesn't sync up directly with ZFC-based theories, and, as is typical of the developers of src/HOL and the AFP, the authors don't explain any of it in textbook style, not even in monograph style, not in any style. Style requires the absence of a void.)
If what I give you here is not what you want, you can tell me to delete it, to keep it unanswered so that maybe someone else will come along with something better.
Overview
Below, first I discuss some things about UNIV, and mention some other problems in your last lemma, and with what you say in the last two paragraphs.
I then focus on the fact that the key to all of this is figuring out how h > 0 and h < 0 affects the inequalities, when moving the h from one side to the other.
You might not understand what UNIV is
A key phrase you use in your 2nd to last paragraph is "instead of having a set S, I use the set of real numbers instead (UNIV)".
If you mean S::real set as any subset of the real numbers, versus UNIV::real set, which is all of the real numbers, then that makes sense; but S in all your lemmas is of type real set by type inference, as can be seen in the output panel if types are shown.
Additionally, UNIV is polymorphic, of type 'a set, as shown by this source in src/HOL/Set.thy#l60.
subsubsection {* The universal set -- UNIV *}

abbreviation UNIV :: "'a set" where
  "UNIV ≡ top"

lemma UNIV_def:
  "UNIV = {x. True}"
  by (simp add: top_set_def top_fun_def)
I don't understand what solution you're talking about with "I am struggling to understand why when S=UNIV, a solution can be found", or what two cases you're talking about. I only see one proof goal in all the lemmas. Below, though, I end up using 2 cases as part of a conjunction.
Eliminating UNIV from your lemmas
I don't think UNIV is of key importance here. Also, there might be some conditions in your lemmas that aren't required, though I try to change things as little as possible.
I do get rid of UNIV, because if I can prove a theorem for any real set, then it's also true for UNIV::real set. Consider this:
lemma "(∀S. continuous_on S f) ==> continuous_on UNIV f"
by(simp)
There is also this:
lemma "open (UNIV::real set)"
by(simp)
The first part of your last theorem is this:
lemma
fixes f g :: "real => real"
assumes "S = UNIV"
and "open S"
...
Because you assume S = UNIV, you don't need open S. Because of that, and because I don't understand some things you've said, I now move away from your last lemma and the last two paragraphs.
I put two uses of abs in your 1st lemma, and get rid of UNIV
My goal, like your goal, is to prove theorems with no use of abs h. A mid-level point was inserting two uses of abs h in your 1st lemma, based on what you did:
lemma
  fixes f g :: "real => real"
  assumes "open S"
    and "∀a b. a < b <-> f a < f b"
    and "∀a. f a > 0"
    and "continuous_on S f"
    and "∀w∈S. ∀h. (w + h)∈S --> abs h * f w ≤ g (w + h) - g w"
  shows "∀w∈S. eventually (λh. f w ≤ (g (w + h) - g w)/abs h) (at 0)"
  using assms unfolding eventually_at
  apply (auto simp: divide_simps mult_ac)
  by (metis (no_types, hide_lams) add.commute add_diff_cancel add_left_cancel
      assms(2) assms(3) diff_0 diff_0_right diff_minus_eq_add dist_norm
      monoid_add_class.add.left_neutral mult.commute open_real_def)
There, I eliminated the use of UNIV, and used S, any set of reals.
What's positive or negative in the inequalities is a key point
Related to this is the following basic inequality:
lemma "∀h > 0::real. h * x ≤ y <-> x ≤ y/h"
by(auto simp add: mult_imp_le_div_pos less_eq_real_def mult.commute
pos_less_divide_eq)
In the inequality, when the multiplier h is positive, life is easy, because the direction of the inequality won't change, regardless of the signs of x and y.
At least with Sledgehammer, that's why it's easy to prove the theorems when abs h is used. We don't have to worry about the formula f w ≤ g (w + h) - g w, about whether either side is positive or negative.
Here's how I finally modified your 1st lemma
It's like this:
lemma
  fixes f g :: "real => real"
  assumes "open S"
    and "∀a b. a < b <-> f a < f b"
    and "∀a. f a > 0"
    and "continuous_on S f"
    and "∀w∈S. ∀h. (w + h)∈S --> h * f w ≤ g (w + h) - g w"
  shows "∀w∈S. eventually (λh.
           (h > 0 --> f w ≤ (g (w + h) - g w)/h) ∧
           (h < 0 --> f w ≥ (g (w + h) - g w)/h)) (at 0)"
  using assms unfolding eventually_at
  apply (auto simp: divide_simps mult_ac)
  by (metis add.commute add_diff_cancel assms(3) assms(4) assms(5) diff_0_right
      dist_norm not_less open_real_def)
Here's my explanation (cases: for h > 0 and h < 0)
Two of the conditions in the lemma are ∀a b. a < b <-> f a < f b and ∀a. f a > 0, so f is a positive, monotone increasing function. I don't see that either of those gets used.
Case: h > 0 and (w + h) an element of S
Because ∀w ∈ S. ∀h. (w + h) ∈ S --> h * f w ≤ g (w + h) - g w, when h > 0 and (w + h) ∈ S we have
h * f w ≤ g (w + h) - g w.
We can multiply by 1/h, since h is not equal to 0, and the direction of the inequality stays the same. In the eventually, I assume the dummy variable is never equal to 0, so the first half of the conjunction will eventually be true as h goes to 0.
Case: h < 0 and (w + h) an element of S
Likewise, when h < 0 and (w + h) ∈ S, then
h * f w ≤ g (w + h) - g w.
But because h < 0, if we multiply by 1/h, we have to reverse the direction of the inequality.
Therefore, the second half of the conjunction in the lambda function will eventually be true, as h goes to 0.
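For the record, the h < 0 direction can be stated as a mirror image of the earlier basic inequality. The statement follows the reasoning above, but the lemma and its field_simps proof are my own sketch rather than something taken from the answer, and may need adjustment:
lemma "∀h < 0::real. h * x ≤ y <-> y/h ≤ x"
  (* dividing by a negative h flips the direction of the inequality *)
  by (auto simp add: field_simps)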
Obnoxious update: You didn't ask for my opinion about Stack Overflow etiquette, and I can be an abuser of etiquette myself, maybe even with this answer, but I think each "tag community" should work to police its own. Unfortunately, etiquette rules aren't clearly stated here, the way they are at the reddit Rust site, reddit.com/r/rust. I end up doing this, and that's no good either, but maybe it could help influence someone who actually has some influence.
I don't care if you accept my answer here, and you may have reasons for not accepting some of the answers already given to you, but as an example, it's my opinion that you should accept the answer given by R. Thiemann for Substitution in Isabelle.
By not accepting an answer, you're basically saying, "I've not yet received an answer which gives me the information that I want". Additionally, answers that are not accepted show up under the Isabelle tag's unanswered category.
I think everyone should understand how few people there are in the world who can answer questions about non-trivial math problems implemented in Isabelle/HOL. I'll guess that there are about 200 people worldwide who actively use Isabelle and can be considered knowledgeable, proficient users. Out of those, there are even fewer who keep calculus, real analysis, and topology fresh in their mind, as implemented in Isabelle/HOL.
The use of Isabelle is a hybrid discipline, combining formal math, logic, and computer science, and at a level of formalism that would typically be at the post-4-year-degree level, partly because there aren't textbooks that explain the Isabelle/HOL logic and math, at an undergraduate level, and partly because it's just hard, graduate-level logic and mathematics.
The people who have graduate-level knowledge about topology, and who have the time and desire to answer questions about topology, are more likely to operate on mathoverflow.net (this links to a question) and math.stackexchange.com. (Note: I picked that question and answer to show that many answers on that site are long or longish, because they try to explain the underlying math of proofs. With Isabelle, if a person is into that kind of thing, like me, then there's often even more to explain. There can be the math to explain, and then the details of what the Isabelle/HOL syntax means mathematically, such as my comments about UNIV above.)
I say the above because, personally, when I ask a question, I start out with the assumption that I'm not going to get an answer if a person has to think more than, let's say, 15 minutes. No, make that 5 minutes.
If I get useful information that gives me some insight, then I accept the answer. I would not accept an answer if it was extremely important I get the right information. For math problems, there are always more questions to ask than can be explained by people, so at best, generally, you can only expect to be pointed in the right direction.
You didn't ask for 8 paragraphs of my opinion, but I'm sort of not just talking to you. The problem of people trying to learn to do mathematics in Isabelle/HOL is a big problem, as I see it. We can't say, "Oh, you need to look at Topology in Isabelle/HOL, by James Munkres." There are things like Topology on the AFP, but that's a far cry from a decently written textbook or monograph.
I can delete this answer, or this part of the answer, if that ends up being what I should do.
