Question
I would like to understand whether there exists a simple method for importing classes into locales.
Alternatively, I would like to understand whether there is a simple method that would enable me to use multiple types within the assumptions of classes.
I would like to reuse theorems that are associated with certain pre-defined classes in the library HOL for the development of my own locales. However, it seems to me that, at the moment, there are no standard methods that would allow me to achieve this (e.g. see this question - clause 5).
Unfortunately, my problem requires the definition of structures (i.e. locales or classes) whose assumptions use multiple types. Thus, I would prefer to use locales. However, I would also like to avoid code duplication and reuse the structures that already exist in the library HOL as much as I can.
theory my_theory
imports Complex_Main
begin
(*It is possible to import other classes, establish a subclass relationship and
use theorems from the super classes. However, if I understand correctly, it
is not trivial to ensure that multiple types can be used in the assumptions
that are associated with the subclass.*)
class my_class = order +
fixes f :: "'a ⇒ real"
begin
subclass order
proof
qed
end
lemma (in my_class) property_class: "⟦ x ≤ y; y ≤ z ⟧ ⟹ x ≤ z"
by auto
(*Multiple types can be used with ease. However, I am not sure how (if
it is possible) to ensure that the lemmas that are associated with the
imported class can be reused in the locale.*)
locale my_locale =
less_eq: order less_eq
for less_eq :: "'a ⇒ 'a ⇒ bool" +
fixes f :: "'a ⇒ 'b"
begin
sublocale order
proof
qed
end
sublocale my_locale ⊆ order
proof
qed
(*nitpick finds a counterexample, because, for example, less_eq is treated
as a free variable.*)
lemma (in my_locale) property_locale: "⟦ x ≤ y; y ≤ z ⟧ ⟹ x ≤ z"
by nitpick
end
Proposed solution
At the moment I am thinking about redefining, in my own locales, the minimal set of axioms that is sufficient to establish the equivalence between my locales and the corresponding classes in HOL. However, this approach results in a certain amount of code duplication:
theory my_plan
imports Complex_Main
begin
locale partial_order =
fixes less_eq :: "'a ⇒ 'a ⇒ bool" (infixl "≼" 50)
and less :: "'a ⇒ 'a ⇒ bool" (infixl "≺" 50)
assumes refl [intro, simp]: "x ≼ x"
and anti_sym [intro]: "⟦ x ≼ y; y ≼ x ⟧ ⟹ x = y"
and trans [trans]: "⟦ x ≼ y; y ≼ z ⟧ ⟹ x ≼ z"
and less_eq: "(x ≺ y) = (x ≼ y ∧ x ≠ y)"
begin
end
sublocale partial_order ⊆ order
proof
fix x y z
show "x ≼ x" by simp
show "x ≼ y ⟹ y ≼ z ⟹ x ≼ z" using local.trans by blast
show "x ≼ y ⟹ y ≼ x ⟹ x = y" by blast
show "(x ≺ y) = (x ≼ y ∧ ¬ y ≼ x)" using less_eq by auto
qed
sublocale order ⊆ partial_order
proof
fix x y z
show "x ≤ x" by simp
show "x ≤ y ⟹ y ≤ x ⟹ x = y" by simp
show "x ≤ y ⟹ y ≤ z ⟹ x ≤ z" by simp
show "(x < y) = (x ≤ y ∧ x ≠ y)" by auto
qed
lemma (in partial_order) le_imp_less_or_eq: "x ≼ y ⟹ x ≺ y ∨ x = y"
by (simp add: le_imp_less_or_eq)
end
Is the approach that I intend to follow considered to be an acceptable style for the development of a library in Isabelle? Unfortunately, I have not seen this approach being used within the context of the development of HOL. However, I am still not familiar with a large part of the library.
Also, please let me know if any of the information stated in the question is incorrect: I am new to Isabelle.
General comments that are not directly related to the question
Lastly, as a side note, I have noticed that there may be a certain amount of partial code duplication in HOL. In particular, it seems to me that the theories in HOL/Lattice/, HOL/Algebra/Order and HOL/Algebra/Lattice, and HOL/Library/Boolean_Algebra resemble the theories HOL/Orderings and HOL/Lattices. However, I am not certain whether the equivalence between these theories was established through the sublocale/subclass relationship (e.g. see class_deps) to a sufficient extent. Of course, I understand that the theories use distinct axiomatisations, and that the theories in HOL/Algebra/ and HOL/Library/Boolean_Algebra are based on locales. Furthermore, the theories in HOL/Algebra/ contain a certain amount of material that has not been formalised elsewhere. However, I would still like to gain a better understanding of why all four developments co-exist in HOL and why the relationship between them is not always clearly indicated.
A solution to the problem was proposed on the mailing list of Isabelle by Akihisa Yamada and is available at the following hyperlink: link. A copy of the solution (with minor changes to formatting) is also provided below for a reference with the permission of the author.
It should be noted that the proposed solution has also been used in the context of the development of HOL.
Solution proposed by Akihisa Yamada
Let me comment on your technical questions, as I also tackled the same goal as you. I'll be happy if there's a better solution, though.
lemma (in my_locale) property_locale: "⟦ x ≤ y; y ≤ z ⟧ ⟹ x ≤ z"
by nitpick
Interpreting a class as a locale doesn't seem to import notations, so here "≤" refers to the global one for "ord", which assumes nothing (you can check by ctrl+hover on x etc.).
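Note that the interpreted facts themselves are nevertheless accessible under the less_eq prefix, so the lemma can already be stated with the explicit locale parameter instead of the missing notation. A minimal sketch, assuming my_locale as declared above (property_explicit is a made-up name):
lemma (in my_locale) property_explicit:
  "⟦ less_eq x y; less_eq y z ⟧ ⟹ less_eq x z"
  using less_eq.order_trans .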
My solution is to define a locale for syntax and interpret it (sublocale is somehow slow) whenever you want to use the syntax.
locale ord_syntax = ord
begin
notation less_eq (infix "⊑" 50)
notation less (infix "⊏" 50)
abbreviation greater_eq_syntax (infix "⊒" 50) where
"greater_eq_syntax ≡ ord.greater_eq less_eq"
abbreviation greater_syntax (infix "⊐" 50) where
"greater_syntax ≡ ord.greater less"
end
context my_locale begin
interpretation ord_syntax.
lemma property_locale: "⟦ x ⊑ y; y ⊑ z ⟧ ⟹ x ⊑ z" using less_eq.order_trans.
end
Related
I have the following subgoals:
proof (prove)
goal (2 subgoals):
1. ⋀y. ∃x. P x ⟹ P (?x6 y)
2. ⋀y. ∃x. P x ⟹ Q (?y8 y) ⟹ Q y
I want to conclude the proof, or continue trying things, but I don't know how to introduce terms into the unknowns (schematic variables), i.e. the variables prefixed with ?.
How does one do that?
Firstly, it is necessary to understand how schematic variables appeared in your subgoals. Normally, unless you are using schematic_goal, schematic variables appear in the subgoals after some form of rule application, whether implicit or explicit.
If the rule application was explicit (e.g. apply (rule conjunct1)), then a reasonably standard methodology for dealing with the problem that you described is to substitute the variables that you wish to 'try' directly into the rule, e.g. apply (rule conjunct1[of A]). In this case, there will be no schematic variables in your goals and, therefore, the problem implicitly disappears.
If the rule application was implicit (e.g. via one of the tools for classical reasoning), then your options depend on whether the subgoals were generated in an apply script or within the body of an Isar proof. Nonetheless, before I proceed, I would like to mention that proofs in which you have to interact with subgoals generated by 'black-box' methods are not considered to be very good style (at least, in my opinion).
In the case of an Isar proof, there is nothing special that you need to do to "try stuff": once the variable that you wish to substitute (e.g. z) is defined, you can use show "∃x. P x ⟹ P (z y)" in the body of the proof. Similarly, in an apply script, you can resolve with pre-substituted variables.
I demonstrate all these methods in the context of a simplified example below:
context
fixes A B :: bool
assumes AB: "A ∧ B"
begin
lemma A by (rule conjunct1[of _ B]) (rule AB)
lemma A
by (rule conjunct1) (rule AB)
lemma A
proof(rule conjunct1)
show "A ∧ B" by (rule AB)
qed
end
The important parts are already explained by user9716869. I just wanted to add:
Your current subgoal is probably not solvable if you don't have additional information available. If you need the x from ∃x. P x to instantiate the schematic variable ?x6, then you need to obtain the value of x before the schematic variable is created.
Schematic variables are instantiated automatically by matching.
This works well if the schematic variable is not a function, so you can just continue to write your proof as if the correct value was already there.
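As a minimal sketch of this matching behaviour:
lemma "∃x::nat. x = 1"
  apply (rule exI)   ― ‹goal becomes: ?x = 1›
  apply (rule refl)  ― ‹unification instantiates ?x with 1›
  done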
If you want to fix the value in an apply style proof (other cases are already given in the other answer), you could use subgoal_tac followed by assumption:
lemma "⋀y. ∃x. P x ⟹ ∃x::nat. P x"
apply (rule exI)
― ‹⋀y. ∃x. P x ⟹ P (?x y)›
apply (subgoal_tac "P 42", assumption)
― ‹⋀y. ∃x. P x ⟹ P 42›
oops ― ‹Not possible to prove›
lemma "⋀y. ∃x. P x ⟹ ∃x::nat. P x"
apply (erule exE)
― ‹⋀y x. P x ⟹ ∃x. P x›
apply (rule exI)
― ‹⋀y x. P x ⟹ P (?x2 y x)›
apply (subgoal_tac "P x", assumption)
― ‹⋀y x. P x ⟹ P x›
by assumption
Imagine the following theorem:
assumes d: "distinct (map fst zs_ws)"
assumes e: "(p :: complex poly) = lagrange_interpolation_poly zs_ws"
shows "degree p ≤ (length zs_ws)-1 ∧
(∀ x y. (x,y) ∈ set zs_ws ⟶ poly p x = y)"
I would like to eliminate the second assumption, without having to substitute the value of p on each occurrence. I did this in proofs with the let command:
let ?p = lagrange_interpolation_poly zs_ws
But it doesn't work in the theorem statement. Ideas?
You can make a local definition in the lemma statement like this:
lemma l:
fixes zs_ws
defines "p == lagrange_interpolation_poly zs_ws"
assumes d: "distinct (map fst zs_ws)"
shows "degree p ≤ (length zs_ws)-1 ∧ (∀(x,y) ∈ set zs_ws. poly p x = y)"
The definition gets unfolded when the proof is finished. So when you look at thm l later, all occurrences of p have been substituted by the right-hand side. Inside the proof, p_def refers to the defining equation for p (what you call e). The defines clause is most useful when you want to control, within the proof, when Isabelle's proof tools see just p and when they see the expanded right-hand side.
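For illustration, here is a minimal self-contained instance (the name l2 and the definition of m are made up for this sketch):
lemma l2:
  fixes n :: nat
  defines "m ≡ n + n"
  shows "m = 2 * n"
  by (simp add: m_def mult_2)
Inside the proof, m_def is available as usual; afterwards, thm l2 displays n + n = 2 * n, with m unfolded.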
Consider the following definition (see also this answer):
definition phi :: "nat ⇒ nat" where "phi n = card {k∈{0<..n}. coprime n k}"
How can I then prove a very basic fact, like phi p = p - 1 for a prime p? Here is one possible formalization of this lemma, though I'm not sure it's the best one:
lemma basic:
assumes "prime_elem (p::nat) = true"
shows "phi p = p-1"
(prime_elem is defined in Factorial_Ring.thy)
Using try resp. try0 doesn't lead anywhere. (A proof by hand is immediate though, since the GCD of p and any m less than p is 1. But poking around various files didn't turn out to be very helpful; I imagine I have to guess some clever lemma that I have to give to auto for the proof to succeed.)
First of all, true doesn't exist. Isabelle interprets this as a free Boolean variable (as you can see by the fact that it is printed blue). You mean True. Also, writing prime_elem p = True is somewhat unidiomatic; just write prime_elem p.
Next, I would suggest using prime p. It's equivalent to prime_elem on the naturals; for other types, the difference is that prime also requires the element to be ‘canonical’, i.e. 2 :: int is prime, but -2 :: int is not.
So your lemma looks like this:
lemma basic:
assumes "prime_elem (p::nat)"
shows "phi p = p - 1"
proof -
Next, you should prove the following:
from assms have "{k∈{0<..p}. coprime p k} = {0<..<p}"
If you throw auto at this, you'll get two subgoals, and sledgehammer can solve them both, so you're done. However, the resulting proof is a bit ugly:
apply auto
apply (metis One_nat_def gcd_nat.idem le_less not_prime_1)
by (simp add: prime_nat_iff'')
You can then simply prove your overall goal with this:
thus ?thesis by (simp add: phi_def)
A more reasonable and robust way would be this Isar proof:
lemma basic:
assumes "prime (p::nat)"
shows "phi p = p - 1"
proof -
have "{k∈{0<..p}. coprime p k} = {0<..<p}"
proof safe
fix x assume "x ∈ {0<..p}" "coprime p x"
with assms show "x ∈ {0<..<p}" by (cases "x = p") auto
next
fix x assume "x ∈ {0<..<p}"
with assms show "coprime p x" by (simp add: prime_nat_iff'')
qed auto
thus ?thesis by (simp add: phi_def)
qed
By the way, I would recommend restructuring your definitions in the following way:
definition rel_primes :: "nat ⇒ nat set" where
"rel_primes n = {k ∈ {0<..n}. coprime k n}"
definition phi :: "nat ⇒ nat" where
"phi n = card (rel_primes n)"
Then you can prove nice auxiliary lemmas for rel_primes. (You'll need them for more complicated properties of the totient function)
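For example, two such auxiliary lemmas (hypothetical names; both follow directly from the definition) could look like this:
lemma rel_primes_mem: "k ∈ rel_primes n ⟷ 0 < k ∧ k ≤ n ∧ coprime k n"
  by (auto simp: rel_primes_def)
lemma rel_primes_0 [simp]: "rel_primes 0 = {}"
  by (auto simp: rel_primes_def)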
While proving a simple theorem, I came across meta-level implications in the proof. Is it OK to have them, or could they be avoided? If I should handle them, is this the right way to do so?
theory Sandbox
imports Main
begin
lemma "(x::nat) > 0 ∨ x = 0"
proof (cases x)
assume "x = 0"
show "0 < x ∨ x = 0" by (auto)
next
have "x = Suc n ⟹ 0 < x" by (simp only: Nat.zero_less_Suc)
then have "x = Suc n ⟹ 0 < x ∨ x = 0" by (auto)
then show "⋀nat. x = Suc nat ⟹ 0 < x ∨ x = 0" by (auto)
qed
end
I guess this could be proved more easily but I wanted to have a structured proof.
In principle, meta-implication ==> is nothing to be avoided (in fact, it's the "native" way to express inference rules in Isabelle). Nevertheless, there is a canonical way that often allows us to avoid meta-implication when writing Isar proofs. E.g., for a general goal
"!!x. A ==> B"
we can write in Isar
fix x
assume "A"
...
show "B"
For your specific example, when looking at it in Isabelle/jEdit you might notice that
the n of the second case is highlighted. The reason is that it is a free variable. While this is not a problem per se, it is more canonical to fix such variables locally (like the typical statement "for an arbitrary but fixed ..." in textbooks). E.g.,
next
fix n
assume "x = Suc n"
then have "0 < x" by (simp only: Nat.zero_less_Suc)
then show "0 < x ∨ x = 0" ..
qed
Here it can again be seen how fix/assume/show in Isar corresponds to the actual goal, i.e.,
1. ⋀nat. x = Suc nat ⟹ 0 < x ∨ x = 0
When writing structured proofs, it is best to avoid meta-implication (and quantification) for the outermost structure of the subgoal. I.e. instead of talking about
⋀x. P x ⟹ Q x ⟹ R x
you should use
fix x
assume "P x" "Q x"
...
show "R x"
If P x and Q x have some structure, it is fine to use meta-implication and -quantification for these.
There are a number of reasons to prefer fix/assume over the meta-operators in structured proofs.
Somewhat trivially, you do not have to state them again in every have and show statement.
More importantly, when you use fix to quantify a variable, it stays the same in the whole proof. If you use ⋀, it is freshly quantified in each have statement (and does not exist outside of it). This makes it impossible to refer to this variable directly and often complicates the search space for automated tools. Similar things hold for assume vs ⟹; the sketch below illustrates the difference.
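A small notepad experiment, for illustration:
notepad
begin
  have "⋀x::nat. x + 0 = x" by simp
    ― ‹this x is local to the have and cannot be referenced afterwards›
  fix y :: nat
  have *: "y + 0 = y" by simp
    ― ‹y stays fixed, so later steps can refer to y and to the fact * directly›
end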
A more intricate point is the behaviour of show in the presence of meta-implications. Consider the following proof attempt:
lemma "P ⟷ Q"
proof
show "P ⟹ Q" sorry
next
show "Q ⟹ P" sorry
qed
After the proof command, there are two subgoals: P ⟹ Q and Q ⟹ P. Nevertheless, the final qed fails. How did this happen?
The first show applies the rule P ⟹ Q to the first applicable subgoal, namely P ⟹ Q. Using the usual rule resolution mechanism of Isabelle, this yields P ⟹ P (an assume "P" followed by show "Q" would instead have removed the subgoal).
The second show applies the rule Q ⟹ P to the first applicable subgoal: This is now P ⟹ P (as Q ⟹ P is the second subgoal), yielding P ⟹ Q again.
As a result, we still have the two subgoals P ⟹ Q and Q ⟹ P, and qed cannot close the goal.
In many cases, we don't notice this behaviour of show, as trivial subgoals like P ⟹ P can be solved by qed.
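For comparison, the canonical variant of the proof above consumes the two subgoals as intended:
lemma "P ⟷ Q"
proof
  assume "P"
  then show "Q" sorry
next
  assume "Q"
  then show "P" sorry
qed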
A few words on the behavior of show: as we have seen above, meta-implication in show does not correspond to assume. Instead, it corresponds to assume's lesser-known brother, presume. presume allows you to introduce new assumptions, but requires you to discharge them afterwards. As an example, compare
lemma "P 2 ⟹ P 0"
proof -
presume "P 1" then show "P 0" sorry
next
assume "P 2" then show "P 1" sorry
qed
and
lemma "P 2 ⟹ P 0"
proof -
show "P 1 ⟹ P 0" sorry
next
assume "P 2" then show "P 1" sorry
qed
I define an inductive relation called step_g. Here is one of the inference rules:
G_No_Op:
"∀j ∈ the (T i). ¬ (eval_bool p (the (γ ⇩t⇩s j)))
⟹ step_g a i T (γ, (Barrier, p)) (Some γ)"
I want to invoke this rule in a proof, so I type
apply (rule step_g.G_No_Op)
but the rule cannot be applied, because the goal must already match the rule's conclusion syntactically (the two γ's must coincide). So I adapt the rule like so:
lemma G_No_Op_helper:
"⟦ ∀j ∈ the (T i). ¬ (eval_bool p (the (γ ⇩t⇩s j))) ; γ = γ' ⟧
⟹ step_g a i T (γ, (Barrier, p)) (Some γ')"
by (simp add: step_g.G_No_Op)
Now, when I invoke rule G_No_Op_helper, the requirement that "the two γ's must match" becomes a subgoal to be proven.
The transformation of G_No_Op into G_No_Op_helper looks rather mechanical. My question is: is there a way to make Isabelle do this automatically?
Edit. I came up with a "minimal working example". In the following, lemma A is equivalent to A2, but rule A doesn't help to prove the theorem, only rule A2 works.
consts foo :: "nat ⇒ nat ⇒ nat ⇒ bool"
lemma A: "x < y ⟹ foo y x x"
sorry
lemma A2: "⟦ x < y ; x = z ⟧ ⟹ foo y x z"
sorry
theorem "foo y x z"
apply (rule A)
To my knowledge, nothing exists to automate these things. One could probably implement this as an attribute, i.e.
thm A[generalised x]
to obtain something like A2. The attribute would replace every occurrence of the variable it is given (i.e. x here) but the first in the conclusion of the theorem with a fresh variable x' and add the premise x' = x to the theorem.
This shouldn't be very hard to implement for someone more skilled in Isabelle/ML than me – maybe some of the advanced Isabelle/ML hackers who read this could comment on the idea.
There is a well-known principle of "proof by definition", i.e. you write your initial specifications in such a way that the resulting rules are easy to apply. This might occasionally look unexpected to informal readers, but it is normal for formalists.
I had similar problems and wrote a method named fuzzy_rule, which can be used like this:
theorem "foo y x z"
apply (fuzzy_rule A)
subgoal "x < y"
sorry
subgoal "x = z"
sorry
The code is available at https://github.com/peterzeller/isabelle_fuzzy_rule