Let's say I have a lemma about a simple inductively-defined set:
inductive_set foo :: "'a ⇒ 'a list set" for x :: 'a where
"[] ∈ foo x" | "[x] ∈ foo x"
lemma "⋀x y. y ∈ foo x ⟹ qux x y ⟹ baz x y"
(It's important to me that the "⋀x y" bit stays, because the lemma actually records the state of my proof in the middle of a long apply chain.)
I'm having trouble starting the proof of this lemma. I would like to proceed by rule induction.
First attempt
I tried writing
apply (induct rule: foo.induct)
but that doesn't work: the induct method fails. I find I can get around this by fixing x and y explicitly, and then invoking the induct method, like so:
proof -
fix x :: 'a
fix y :: "'a list"
assume "y ∈ foo x" and "qux x y"
thus "baz x y"
apply (induct rule: foo.induct)
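― ‹goals are now "qux x [] ⟹ baz x []" and "qux x [x] ⟹ baz x [x]":
   the premise has been generalised along with the conclusion›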
oops
However, since I'm actually in the middle of an apply chain, I would rather not enter a structured proof block.
Second attempt
I tried using the induct_tac method instead, but unfortunately induct_tac does not apply the foo.induct rule in the way I would like. If I type
apply (induct_tac rule: foo.induct, assumption)
then the first subgoal is
⋀x y. y ∈ foo x ⟹ qux x y ⟹ baz x []
which is not what I want: I wanted qux x [] instead of qux x y. The induct method got this right, but had other problems, discussed above.
If you first transform your goal to look like this:
⋀x y. y ∈ foo x ⟹ qux x y ⟶ baz x y
then apply (induct_tac rule: foo.induct) will instantiate the induction rule in the way that you want: the conclusion qux x y ⟶ baz x y is now a single formula, so it is generalised over the induction as a whole. (It will also leave the object-level implications in the resulting goals, to which you will need to apply (rule impI).)
The induct method does these extra steps dealing with implications automatically, which is one of its major advantages.
On the other hand, induct_tac rule: foo.induct doesn't do anything more than apply (rule foo.induct). (In general, induct_tac can match the variables you specify, and automatically choose an induction rule based on their types, but you're not taking advantage of those features here.)
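For concreteness, here is a minimal runnable sketch of the suggested transformation; qux and baz are placeholder constants introduced only for this illustration, and the proof is left open:
consts qux :: "'a ⇒ 'a list ⇒ bool"
consts baz :: "'a ⇒ 'a list ⇒ bool"
lemma "⋀x y. y ∈ foo x ⟹ qux x y ⟶ baz x y"
  apply (induct_tac rule: foo.induct, assumption)
  ― ‹first subgoal is roughly "⋀x y. y ∈ foo x ⟹ qux x [] ⟶ baz x []"›
  apply (rule impI)
  oops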
I think your best option is to go ahead and use a proof block at the end of your apply chain. If you are worried that all the fix, assume and show statements are too verbose, then you can use the little-advertised case goalN feature:
apply ...
apply ...
proof -
case goal1 thus ?case
apply induct
...
qed
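Note that in recent Isabelle releases the implicit goal1 cases are gone; as far as I know, the equivalent pattern is written with the explicit goal_cases proof method, roughly:
apply ...
apply ...
proof goal_cases
  case (1 x y)
  then show ?case
    apply induct
    ...
qed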
Related
I have the following subgoals:
proof (prove)
goal (2 subgoals):
1. ⋀y. ∃x. P x ⟹ P (?x6 y)
2. ⋀y. ∃x. P x ⟹ Q (?y8 y) ⟹ Q y
I want to conclude the proof, or continue trying things out, but I don't know how to introduce terms into the unknowns (schematic variables), i.e. the variables prefixed with ?.
How does one do that?
Firstly, it is necessary to understand how schematic variables appeared in your subgoals. Normally, unless you are using schematic_goal, schematic variables appear in the subgoals after some form of rule application, whether implicit or explicit.
If the rule application was explicit (e.g. apply (rule conjunct1)), then a reasonably standard methodology for dealing with the problem that you described is to substitute the variables that you wish to 'try' directly into the rule, e.g. apply (rule conjunct1[of A]). In this case, there will be no schematic variables in your goals and, therefore, the problem implicitly disappears.
If the rule application was implicit (e.g. via one of the tools for classical reasoning), then your options depend on whether the subgoals were generated in an apply script or within the body of an Isar proof. Nonetheless, before I proceed, I would like to mention that the proofs where you have to interact with subgoals generated after the application of any 'black-box' methods are not considered to be a very good style (at least, in my opinion).
If the subgoals appear within the body of an Isar proof, there is nothing special that you need to do to "try stuff": once the variable that you wish to substitute (e.g. z) is defined, you can use show "∃x. P x ⟹ P (z y)". Similarly, in an apply script, you can resolve with a rule into which the desired variables have already been substituted.
I demonstrate all these methods in the context of a simplified example below:
context
fixes A B :: bool
assumes AB: "A ∧ B"
begin
lemma A by (rule conjunct1[of _ B]) (rule AB)
lemma A
by (rule conjunct1) (rule AB)
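(* after (rule conjunct1), the goal is the schematic "A ∧ ?Q";
   resolving with AB then instantiates ?Q with B by unification *)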
lemma A
proof(rule conjunct1)
show "A ∧ B" by (rule AB)
qed
end
The important parts are already explained by user9716869. I just wanted to add:
Your current subgoal is probably not solvable if you don't have additional information available. If you need the x from ∃x. P x to instantiate the schematic variable ?x6, then you need to obtain the value of x before the schematic variable is created.
Schematic variables are instantiated automatically by matching.
This works well if the schematic variable is not a function, so you can just continue to write your proof as if the correct value was already there.
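For example, in the following toy goal (unrelated to the ones above) the schematic variable is resolved purely by unification:
lemma "∃x::nat. x = 42"
  apply (rule exI)
  ― ‹goal is now "?x = 42"; refl forces ?x to be instantiated with 42›
  by (rule refl)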
If you want to fix the value in an apply style proof (other cases are already given in the other answer), you could use subgoal_tac followed by assumption:
lemma "⋀y. ∃x. P x ⟹ ∃x::nat. P x"
apply (rule exI)
― ‹⋀y. ∃x. P x ⟹ P (?x y)›
apply (subgoal_tac "P 42", assumption)
― ‹⋀y. ∃x. P x ⟹ P 42›
oops ― ‹Not possible to prove›
lemma "⋀y. ∃x. P x ⟹ ∃x::nat. P x"
apply (erule exE)
― ‹⋀y x. P x ⟹ ∃x. P x›
apply (rule exI)
― ‹⋀y x. P x ⟹ P (?x2 y x)›
apply (subgoal_tac "P x", assumption)
― ‹⋀y x. P x ⟹ P x›
by assumption
Imagine the following theorem:
assumes d: "distinct (map fst zs_ws)"
assumes e: "(p :: complex poly) = lagrange_interpolation_poly zs_ws"
shows "degree p ≤ (length zs_ws)-1 ∧
(∀ x y. (x,y) ∈ set zs_ws ⟶ poly p x = y)"
I would like to eliminate the second assumption, without having to substitute the value of p on each occurrence. I did this in proofs with the let command:
let ?p = lagrange_interpolation_poly zs_ws
But it doesn't work in the theorem statement. Ideas?
You can make a local definition in the lemma statement like this:
lemma l:
fixes zs_ws
defines "p == lagrange_interpolation_poly zs_ws"
assumes d: "distinct (map fst zs_ws)"
shows "degree p ≤ (length zs_ws)-1 ∧ (∀(x,y) ∈ set zs_ws. poly p x = y)"
The definition gets unfolded when the proof is finished. So when you look at thm l later, all occurrences of p have been substituted by the right-hand side. Inside the proof, p_def refers to the defining equation for p (what you call e). The defines clause is most useful when you want to control in the proof when Isabelle's proof tools just see p and when they see the expanded right-hand side.
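As a self-contained toy illustration of this mechanism (not part of the original development):
lemma toy:
  fixes xs :: "'a list"
  defines "n ≡ length xs"
  shows "n ≤ length xs"
  unfolding n_def by simp
Afterwards, thm toy displays "length ?xs ≤ length ?xs": the local definition of n has been unfolded in the final theorem.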
Question
I would like to understand if there exists a simple method for importing classes into locales.
Alternatively, I would like to understand if there is a simple method that would enable me to use multiple types within the assumptions in classes.
I would like to reuse theorems that are associated with certain pre-defined classes in the library HOL for the development of my own locales. However, it seems to me that, at the moment, there are no standard methods that would allow me to achieve this (e.g. see this question - clause 5).
Unfortunately, my problem will require the definition of structures (i.e. locales or classes) with the assumptions that use multiple types. Thus, I would prefer to use locales. However, I would also like to avoid code duplication and reuse the structures that already exist in the library HOL as much as I can.
theory my_theory
imports Complex_Main
begin
(*It is possible to import other classes, establish a subclass relationship and
use theorems from the super classes. However, if I understand correctly, it
is not trivial to ensure that multiple types can be used in the assumptions
that are associated with the subclass.*)
class my_class = order +
fixes f :: "'a ⇒ real"
begin
subclass order
proof
qed
end
lemma (in my_class) property_class: "⟦ x ≤ y; y ≤ z ⟧ ⟹ x ≤ z"
by auto
(*Multiple types can be used with ease. However, I am not sure how (if
it is possible) to ensure that the lemmas that are associated with the
imported class can be reused in the locale.*)
locale my_locale =
less_eq: order less_eq
for less_eq :: "'a ⇒ 'a ⇒ bool" +
fixes f :: "'a ⇒ 'b"
begin
sublocale order
proof
qed
end
sublocale my_locale ⊆ order
proof
qed
(*nitpick finds a counterexample, because, for example, less_eq is treated
as a free variable.*)
lemma (in my_locale) property_locale: "⟦ x ≤ y; y ≤ z ⟧ ⟹ x ≤ z"
by nitpick
end
Proposed solution
At the moment I am thinking about redefining the minimal amount of axioms in my own locales that is sufficient to establish the equivalence between my locales and the corresponding classes in HOL. However, this approach results in a certain amount of code duplication:
theory my_plan
imports Complex_Main
begin
locale partial_order =
fixes less_eq :: "'a ⇒ 'a ⇒ bool" (infixl "≼" 50)
and less :: "'a ⇒ 'a ⇒ bool" (infixl "≺" 50)
assumes refl [intro, simp]: "x ≼ x"
and anti_sym [intro]: "⟦ x ≼ y; y ≼ x ⟧ ⟹ x = y"
and trans [trans]: "⟦ x ≼ y; y ≼ z ⟧ ⟹ x ≼ z"
and less_eq: "(x ≺ y) = (x ≼ y ∧ x ≠ y)"
begin
end
sublocale partial_order ⊆ order
proof
fix x y z
show "x ≼ x" by simp
show "x ≼ y ⟹ y ≼ z ⟹ x ≼ z" using local.trans by blast
show "x ≼ y ⟹ y ≼ x ⟹ x = y" by blast
show "(x ≺ y) = (x ≼ y ∧ ¬ y ≼ x)" using less_eq by auto
qed
sublocale order ⊆ partial_order
proof
fix x y z
show "x ≤ x" by simp
show "x ≤ y ⟹ y ≤ x ⟹ x = y" by simp
show "x ≤ y ⟹ y ≤ z ⟹ x ≤ z" by simp
show "(x < y) = (x ≤ y ∧ x ≠ y)" by auto
qed
lemma (in partial_order) le_imp_less_or_eq: "x ≼ y ⟹ x ≺ y ∨ x = y"
by (simp add: le_imp_less_or_eq)
end
Is the approach that I intend to follow considered to be an acceptable style for the development of a library in Isabelle? Unfortunately, I have not seen this approach being used within the context of the development of HOL. However, I am still not familiar with a large part of the library.
Also, please let me know if any of the information that is stated in the definition of the question is incorrect: I am new to Isabelle.
General comments that are not directly related to the question
Lastly, as a side note, I have noticed that there may be a certain amount of partial code duplication in HOL. In particular, it seems to me that the theories in HOL/Lattice, HOL/Algebra/Order, HOL/Algebra/Lattice and HOL/Library/Boolean_Algebra resemble the theories HOL/Orderings and HOL/Lattices. However, I am not certain whether the equivalence between these theories was established through the sublocale/subclass relationship (e.g. see class_deps) to a sufficient extent. Of course, I understand that the theories use distinct axiomatisations, and that the theories in HOL/Algebra and HOL/Library/Boolean_Algebra are based on locales. Furthermore, the theories in HOL/Algebra contain a certain amount of material that has not been formalised in the other theories. However, I would still like to gain a better understanding of why all four developments co-exist in HOL and why the relationship between them is not always clearly indicated.
A solution to the problem was proposed on the mailing list of Isabelle by Akihisa Yamada and is available at the following hyperlink: link. A copy of the solution (with minor changes to formatting) is also provided below for a reference with the permission of the author.
It should be noted that the proposed solution has also been used in the context of the development of HOL.
Solution proposed by Akihisa Yamada
let me comment on your technical questions, as I also tackled the same goal as you. I'll be happy if there's a better solution, though.
lemma (in my_locale) property_locale: "⟦ x ≤ y; y ≤ z ⟧ ⟹ x ≤ z"
by nitpick
Interpreting a class as a locale doesn't seem to import notations, so here "≤" refers to the global one for "ord", which assumes nothing (you can check by ctrl+hover on x etc.).
My solution is to define a locale for syntax and interpret it (sublocale is somehow slow) whenever you want to use the syntax.
locale ord_syntax = ord
begin
notation less_eq (infix "⊑" 50)
notation less (infix "⊏" 50)
abbreviation greater_eq_syntax (infix "⊒" 50) where
"greater_eq_syntax ≡ ord.greater_eq less_eq"
abbreviation greater_syntax (infix "⊐" 50) where
"greater_syntax ≡ ord.greater less"
end
context my_locale begin
interpretation ord_syntax.
lemma property_locale: "⟦ x ⊑ y; y ⊑ z ⟧ ⟹ x ⊑ z" using less_eq.order_trans.
end
Consider the following definition (see also this answer):
definition phi :: "nat ⇒ nat" where "phi n = card {k∈{0<..n}. coprime n k}"
How can I then prove a very basic fact, like phi p = p - 1 for a prime p? Here is one possible formalization of this lemma, though I'm not sure it's the best one:
lemma basic:
assumes "prime_elem (p::nat) = true"
shows "phi p = p-1"
(prime_elem is defined in Factorial_Ring.thy)
Using try resp. try0 doesn't lead anywhere. (A proof by hand is immediate though, since the GCD of p and any m less than p is 1. But poking around various files didn't turn out to be very helpful; I imagine I have to guess some clever lemma to give to auto for the proof to succeed.)
First of all, true doesn't exist. Isabelle interprets this as a free Boolean variable (as you can see by the fact that it is printed blue). You mean True. Also, writing prime_elem p = True is somewhat unidiomatic; just write prime_elem p.
Next, I would suggest using prime p. It's equivalent to prime_elem on the naturals; for other types, the difference is that prime also requires the element to be ‘canonical’, i.e. 2 :: int is prime, but -2 :: int is not.
So your lemma looks like this:
lemma basic:
assumes "prime_elem (p::nat)"
shows "phi p = p - 1"
proof -
Next, you should prove the following:
from assms have "{k∈{0<..p}. coprime p k} = {0<..<p}"
If you throw auto at this, you'll get two subgoals, and sledgehammer can solve them both, so you're done. However, the resulting proof is a bit ugly:
apply auto
apply (metis One_nat_def gcd_nat.idem le_less not_prime_1)
by (simp add: prime_nat_iff'')
You can then simply prove your overall goal with this:
thus ?thesis by (simp add: phi_def)
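Assembled from the fragments above, that apply-style proof reads:
lemma basic:
  assumes "prime_elem (p::nat)"
  shows "phi p = p - 1"
proof -
  from assms have "{k∈{0<..p}. coprime p k} = {0<..<p}"
    apply auto
    apply (metis One_nat_def gcd_nat.idem le_less not_prime_1)
    by (simp add: prime_nat_iff'')
  thus ?thesis by (simp add: phi_def)
qed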
A more reasonable and robust way would be this Isar proof:
lemma basic:
assumes "prime (p::nat)"
shows "phi p = p - 1"
proof -
have "{k∈{0<..p}. coprime p k} = {0<..<p}"
proof safe
fix x assume "x ∈ {0<..p}" "coprime p x"
with assms show "x ∈ {0<..<p}" by (cases "x = p") auto
next
fix x assume "x ∈ {0<..<p}"
with assms show "coprime p x" by (simp add: prime_nat_iff'')
qed auto
thus ?thesis by (simp add: phi_def)
qed
By the way, I would recommend restructuring your definitions in the following way:
definition rel_primes :: "nat ⇒ nat set" where
"rel_primes n = {k ∈ {0<..n}. coprime k n}"
definition phi :: "nat ⇒ nat" where
"phi n = card (rel_primes n)"
Then you can prove nice auxiliary lemmas for rel_primes. (You'll need them for more complicated properties of the totient function)
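For instance, a first such auxiliary lemma might look like this (a sketch under the definitions above; it holds because every number is coprime to 1):
lemma rel_primes_1 [simp]: "rel_primes 1 = {1}"
  by (auto simp: rel_primes_def)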
I define an inductive relation called step_g. Here is one of the inference rules:
G_No_Op:
"∀j ∈ the (T i). ¬ (eval_bool p (the (γ ⇩t⇩s j)))
⟹ step_g a i T (γ, (Barrier, p)) (Some γ)"
I want to invoke this rule in a proof, so I type
apply (rule step_g.G_No_Op)
but the rule cannot be applied, because its conclusion must be of a particular form already (the two γ's must match). So I adapt the rule like so:
lemma G_No_Op_helper:
"⟦ ∀j ∈ the (T i). ¬ (eval_bool p (the (γ ⇩t⇩s j))) ; γ = γ' ⟧
⟹ step_g a i T (γ, (Barrier, p)) (Some γ')"
by (simp add: step_g.G_No_Op)
Now, when I invoke rule G_No_Op_helper, the requirement that "the two γ's must match" becomes a subgoal to be proven.
The transformation of G_No_Op into G_No_Op_helper looks rather mechanical. My question is: is there a way to make Isabelle do this automatically?
Edit. I came up with a "minimal working example". In the following, lemma A is equivalent to A2, but rule A doesn't help to prove the theorem, only rule A2 works.
consts foo :: "nat ⇒ nat ⇒ nat ⇒ bool"
lemma A: "x < y ⟹ foo y x x"
sorry
lemma A2: "⟦ x < y ; x = z ⟧ ⟹ foo y x z"
sorry
theorem "foo y x z"
apply (rule A)
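For contrast, the generalised variant does apply, turning the mismatch into an explicit subgoal:
theorem "foo y x z"
  apply (rule A2)
  ― ‹remaining goals: "x < y" and "x = z"›
  oops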
To my knowledge, nothing exists to automate these things. One could probably implement this as an attribute, i.e.
thm A[generalised x]
to obtain something like A2. The attribute would replace every occurrence of the variable it is given (i.e. x here) but the first in the conclusion of the theorem with a fresh variable x' and add the premise x' = x to the theorem.
This shouldn't be very hard to implement for someone more skilled in Isabelle/ML than me – maybe some of the advanced Isabelle/ML hackers who read this could comment on the idea.
There is a well-known principle of "proof-by-definition", i.e. you write your initial specifications in such a way that the resulting rules are easy to apply. This might occasionally look unexpected to informal readers, but is normal for formalists.
I had similar problems and wrote a method named fuzzy_rule, which can be used like this:
theorem "foo y x z"
apply (fuzzy_rule A)
subgoal "x < y"
sorry
subgoal "x = z"
sorry
The code is available at https://github.com/peterzeller/isabelle_fuzzy_rule