Given a ring, together with a bijection to another set (n.b. a set, not a type), it ought to be "trivial" to define the (unique) induced ring structure on the "other set" that makes the bijection an isomorphism. But my proof that does this takes 101 lines (plus definitions and lemmas). Is there another approach, using e.g. "lifting", that makes the whole thing fall out more easily?
Here's an example that could be filled in to do what I want:
theory InducedRing
imports "HOL-Algebra.QuotRing"
begin
lemma
assumes "ring ℛ" "bij_betw φ (carrier ℛ) S"
obtains 𝒮 where "ring 𝒮" and "carrier 𝒮 = S" and "φ ∈ ring_hom ℛ 𝒮"
sorry
end
Feel free to rewrite this to an equivalent or stronger statement (e.g., reverse the direction of φ or even assume a pair of inverse bijections, make the hypotheses part of a locale definition, assert uniqueness of 𝒮, change the names according to your own taste...).
The goal is to make the proof feel short and natural. I'm hoping there's a theorem somewhere in HOL-Algebra or another library that will make this easy, but sledgehammer was not able to find it for me -- at least, not with this version of the statement.
I do not think you will get around proving the ring axioms, unless someone has already proven a similar lemma. I have no experience with HOL-Algebra, so maybe there is something nicer.
Luckily, automation seems to handle them fairly well (except for one, where auto needs a bit of help). I obtained the proofs by hitting the goal with auto and sledgehammering what was left to find relevant lemmas to add to auto.
lemma
assumes "ring ℛ" "bij_betw φ (carrier ℛ) S"
obtains 𝒮 where "ring 𝒮" and "carrier 𝒮 = S" and "φ ∈ ring_hom ℛ 𝒮"
proof-
interpret ℛ: ring ℛ
using assms(1) .
(* Define the ring *)
let ?Z = "φ (zero ℛ)"
let ?O = "φ (one ℛ)"
let ?mult = "λx y . φ (inv_into (carrier ℛ) φ x ⊗⇘ℛ⇙ inv_into (carrier ℛ) φ y)"
let ?add = "λx y . φ (inv_into (carrier ℛ) φ x ⊕⇘ℛ⇙ inv_into (carrier ℛ) φ y)"
let ?𝒮 = "⦇ carrier = S, mult = ?mult , one = ?O, zero = ?Z, add = ?add,
… = (undefined :: 'd) ⦈ :: ('c, 'd) ring_scheme"
show thesis
proof(rule that[of ?𝒮])
(* The only part where auto needed some help *)
have helper: "∃y∈S. φ (inv_into (carrier ℛ) φ x ⊕⇘ℛ⇙ inv_into (carrier ℛ) φ y) = φ 𝟬⇘ℛ⇙"
if "x ∈ S" for x
using assms that by (intro bexI[where x="φ (⊖⇘ℛ⇙ (inv_into (carrier ℛ) φ x))"])
(auto intro: bij_betw_apply
simp add: ring.ring_simprules bij_betw_imp_inj_on bij_betw_imp_surj_on inv_into_into)
show "ring ?𝒮"
using assms by unfold_locales
(auto intro: bij_betw_apply simp add: ring.ring_simprules bij_betw_inv_into_right
bij_betw_imp_inj_on bij_betw_imp_surj_on inv_into_into Units_def helper)
next
show "carrier ?𝒮 = S"
by simp
next
from assms(2) show "φ ∈ ring_hom ℛ ?𝒮"
by (intro ring_hom_memI) (auto intro: bij_betw_apply simp add: bij_betw_inv_into_left)
qed
qed
In my proof (and yours as well) I explicitly give the definitions of the induced ring, which makes the obtains in the goal statement a bit pointless, in my opinion. Similarly, S can be computed from carrier ℛ. So why not make the concrete definitions available at the toplevel, for example like this:
abbreviation "bij_ring ℛ φ ≡ ⦇carrier = φ ` carrier ℛ,
mult = λx y . φ (inv_into (carrier ℛ) φ x ⊗⇘ℛ⇙ inv_into (carrier ℛ) φ y), one = φ (one ℛ),
zero = φ (zero ℛ), add = λx y . φ (inv_into (carrier ℛ) φ x ⊕⇘ℛ⇙ inv_into (carrier ℛ) φ y),
… = (undefined :: 'd) ⦈ :: ('c, 'd) ring_scheme"
lemma
assumes "ring ℛ" "bij_betw φ (carrier ℛ) (φ ` (carrier ℛ))"
shows "ring (bij_ring ℛ φ)" and "φ ∈ ring_hom ℛ (bij_ring ℛ φ)"
proof-
interpret ℛ: ring ℛ
using assms(1) .
(* Sadly it still needs help *)
from assms have helper: "∃x∈carrier ℛ. φ (y ⊕⇘ℛ⇙ x) = φ 𝟬⇘ℛ⇙"
if "y ∈ carrier ℛ" for y
using that by (metis ℛ.add.r_inv_ex)
from assms show "ring (bij_ring ℛ φ)"
by unfold_locales (auto simp add: bij_betw_inv_into_left
ℛ.add.m_assoc bij_betw_imp_inj_on ring.ring_simprules Units_def helper)
from assms(2) show "φ ∈ ring_hom ℛ (bij_ring ℛ φ)"
by (intro ring_hom_memI) (auto intro: bij_betw_apply simp add: bij_betw_inv_into_left)
qed
This also seems to simplify the proofs a bit. Sadly there is some duplication in the goal statement, but I cannot see a way to improve this now.
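On reflection, one way to reduce the duplication might be to note that bij_betw φ A (φ ` A) is the same as inj_on φ A (by bij_betw_def), so the hypothesis does not need to repeat the image. The following is only a statement sketch (unchecked); the proofs above should carry over after converting the assumption, e.g. with inj_on_imp_bij_betw:
lemma
  assumes "ring ℛ" and "inj_on φ (carrier ℛ)"
  shows "ring (bij_ring ℛ φ)" and "φ ∈ ring_hom ℛ (bij_ring ℛ φ)"
  oops (* statement only; the proof would be the one above *)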
Here's my third attempt, using a neat lemma I discovered called ring_iso_imp_img_ring:
theory InducedRing
imports "HOL-Algebra.QuotRing"
begin
lemma induced_ring:
assumes ring_R: "ring ℛ" and phi_bij: "bij_betw φ (carrier ℛ) S"
obtains 𝒮 where "ring 𝒮" and "carrier 𝒮 = S" and "φ ∈ ring_hom ℛ 𝒮"
proof -
interpret R_: ring ℛ by (rule ring_R)
let ?R = "carrier ℛ"
define ψ where "ψ = the_inv_into ?R φ"
have [simp]: "r∈?R ⟹ ψ (φ r) = r" for r by (metis ψ_def phi_bij bij_betw_def the_inv_into_f_eq)
define S_add where [simp]: "S_add x' y' = φ (ψ x' ⊕⇘ℛ⇙ ψ y')" for x' y'
define S_zero where "S_zero = φ 𝟬⇘ℛ⇙"
define S_mult where [simp]: "S_mult x' y' = φ (ψ x' ⊗⇘ℛ⇙ ψ y')" for x' y'
define S_one where [simp]: "S_one = φ 𝟭⇘ℛ⇙"
define 𝒮 where "𝒮 = ⦇carrier = S, mult = S_mult, one = S_one, zero = S_zero, add = S_add,
… = m::'d⦈"
have φ_in_ring_hom: "φ ∈ ring_hom ℛ 𝒮"
proof (auto simp add: ring_hom_def 𝒮_def)
fix x assume x: "x ∈ ?R"
show "φ x ∈ S" using bij_betwE phi_bij x by auto
qed
have carrier_S: "carrier 𝒮 = S" by (simp add: 𝒮_def)
have φ_in_ring_iso: "φ ∈ ring_iso ℛ 𝒮"
using φ_in_ring_hom ring_iso_def phi_bij carrier_S by blast
have [simp]: "𝒮⦇zero := φ 𝟬⇘ℛ⇙⦈ = 𝒮"
apply (rule ring.equality, simp_all)
using S_zero_def 𝒮_def by auto
interpret 𝒮_: ring 𝒮 using R_.ring_iso_imp_img_ring[OF φ_in_ring_iso] by simp
show ?thesis using that[OF 𝒮_.is_ring carrier_S φ_in_ring_hom] .
qed
end
Previous version
Here's my second attempt. It's somewhat more streamlined than my first, but I'm still hoping someone can come up with a much simpler way to do this that does not require restating and verifying the ring axioms one-by-one (in some cases, multiple times).
theory InducedRing
imports "HOL-Algebra.Ring"
begin
lemma
assumes ring_R: "ring ℛ" and phi_bij: "bij_betw φ (carrier ℛ) S"
obtains 𝒮 where "ring 𝒮" and "carrier 𝒮 = S" and "φ ∈ ring_hom ℛ 𝒮"
proof -
interpret R_: ring ℛ by (rule ring_R)
let ?R = "carrier ℛ"
define ψ where "ψ = the_inv_into ?R φ"
have [simp]: "r∈?R ⟹ ψ (φ r) = r" for r by (metis ψ_def phi_bij bij_betw_def the_inv_into_f_eq)
have [simp]: "s∈ S ⟹ φ (ψ s) = s" for s by (metis ψ_def phi_bij f_the_inv_into_f_bij_betw)
define S_add where [simp]: "S_add x' y' = φ (ψ x' ⊕⇘ℛ⇙ ψ y')" for x' y'
then have S_add_closed: "⟦x'∈S; y'∈S⟧ ⟹ S_add x' y' ∈ S" for x' y'
by (metis R_.add.m_closed ψ_def bij_betwE bij_betw_the_inv_into phi_bij)
define S_zero where [simp]: "S_zero = φ 𝟬⇘ℛ⇙"
then have S_zero_closed: "S_zero ∈ S" using bij_betwE phi_bij by blast
define S_mult where [simp]: "S_mult x' y' = φ (ψ x' ⊗⇘ℛ⇙ ψ y')" for x' y'
then have S_mult_closed: "⟦x'∈S; y'∈S⟧ ⟹ S_mult x' y' ∈ S" for x' y'
by (metis R_.m_closed ψ_def bij_betwE bij_betw_the_inv_into phi_bij)
define S_one where [simp]: "S_one = φ 𝟭⇘ℛ⇙"
then have S_one_closed: "S_one ∈ S" using bij_betwE phi_bij by blast
define 𝒮 where "𝒮 = ⦇carrier = S, mult = S_mult, one = S_one, zero = S_zero, add = S_add,
… = m::'d⦈"
have φ_in_ring_hom: "φ ∈ ring_hom ℛ 𝒮"
proof (auto simp add: ring_hom_def 𝒮_def)
fix x assume x: "x ∈ ?R"
show "φ x ∈ S" using bij_betwE phi_bij x by auto
qed
have carrier_S: "carrier 𝒮 = S" by (simp add: 𝒮_def)
interpret 𝒮_: ring 𝒮
proof (unfold_locales, auto simp add: Units_def carrier_S)
show "𝟬⇘𝒮⇙ ∈ S" using S_zero_closed 𝒮_def by auto
show "𝟭⇘𝒮⇙ ∈ S" using S_one_closed 𝒮_def by auto
fix x' assume x': "x' ∈ S"
let ?x = "ψ x'"
have x: "?x ∈ ?R" "φ ?x = x'"
using x' apply (metis ψ_def bij_betw_apply bij_betw_the_inv_into phi_bij)
using x' by auto
show "𝟭⇘𝒮⇙ ⊗⇘𝒮⇙ x' = x'" "x' ⊗⇘𝒮⇙ 𝟭⇘𝒮⇙ = x'" "x' ⊕⇘𝒮⇙ 𝟬⇘𝒮⇙ = x'" "𝟬⇘𝒮⇙ ⊕⇘𝒮⇙ x' = x'"
using φ_in_ring_hom by (simp_all add: 𝒮_def x)
show "∃mx'∈S. mx' ⊕⇘𝒮⇙ x' = 𝟬⇘𝒮⇙ ∧ x' ⊕⇘𝒮⇙ mx' = 𝟬⇘𝒮⇙"
proof (rule, rule)
let ?mx' = "φ (⊖⇘ℛ⇙ ?x)"
show "?mx' ∈ S" using bij_betwE phi_bij x(1) by blast
show "?mx' ⊕⇘𝒮⇙ x' = 𝟬⇘𝒮⇙" "x' ⊕⇘𝒮⇙ ?mx' = 𝟬⇘𝒮⇙"
by (simp_all add: 𝒮_def R_.l_neg R_.r_neg x(1))
qed
fix y' assume y': "y' ∈ S"
let ?y = "ψ y'"
have y: "?y ∈ ?R" "φ ?y = y'"
using y' apply (metis ψ_def bij_betw_apply bij_betw_the_inv_into phi_bij)
using y' by auto
show "x' ⊗⇘𝒮⇙ y' ∈ S" using 𝒮_def S_mult_closed by (simp add: x' y')
show "x' ⊕⇘𝒮⇙ y' ∈ S" using 𝒮_def S_add_closed by (simp add: x' y')
show "x' ⊕⇘𝒮⇙ y' = y' ⊕⇘𝒮⇙ x'"
using φ_in_ring_hom by (simp add: R_.add.m_comm 𝒮_def x(1) y(1))
fix z' assume z': "z' ∈ S"
let ?z = "ψ z'"
have z: "?z ∈ ?R" "φ ?z = z'"
using z' apply (metis ψ_def bij_betw_apply bij_betw_the_inv_into phi_bij)
using z' by auto
show "(x' ⊕⇘𝒮⇙ y') ⊕⇘𝒮⇙ z' = x' ⊕⇘𝒮⇙ (y' ⊕⇘𝒮⇙ z')"
using φ_in_ring_hom 𝒮_def x(1) y(1) z(1) R_.add.m_assoc by simp
show "x' ⊗⇘𝒮⇙ y' ⊗⇘𝒮⇙ z' = x' ⊗⇘𝒮⇙ (y' ⊗⇘𝒮⇙ z')"
using φ_in_ring_hom 𝒮_def x(1) y(1) z(1) R_.m_assoc by simp
show "(x' ⊕⇘𝒮⇙ y') ⊗⇘𝒮⇙ z' = x' ⊗⇘𝒮⇙ z' ⊕⇘𝒮⇙ y' ⊗⇘𝒮⇙ z'"
using ring_hom_add[OF φ_in_ring_hom] ring_hom_mult[OF φ_in_ring_hom] x y z
R_.l_distr R_.add.m_closed R_.m_closed
by (smt (verit, ccfv_threshold))
show "z' ⊗⇘𝒮⇙ (x' ⊕⇘𝒮⇙ y') = z' ⊗⇘𝒮⇙ x' ⊕⇘𝒮⇙ z' ⊗⇘𝒮⇙ y'"
using ring_hom_add[OF φ_in_ring_hom] ring_hom_mult[OF φ_in_ring_hom] x y z
R_.r_distr R_.add.m_closed R_.m_closed
by (smt (verit, ccfv_threshold))
qed
show ?thesis using that[of 𝒮, OF 𝒮_.ring_axioms carrier_S φ_in_ring_hom] .
qed
end
Related
I'm trying to add some small improvements to Jacobson_Basic_Algebra.
ORIGINAL DEFINITION:
Their definition of monoid isomorphism is as follows.
locale monoid_isomorphism =
bijective_map η M M' + source: monoid M "(⋅)" 𝟭 + target: monoid M' "(⋅')" "𝟭'"
for η and M and composition (infixl "⋅" 70) and unit ("𝟭")
and M' and composition' (infixl "⋅''" 70) and unit' ("𝟭''") +
assumes commutes_with_composition: "⟦ x ∈ M; y ∈ M ⟧ ⟹ η x ⋅' η y = η (x ⋅ y)"
and commutes_with_unit: "η 𝟭 = 𝟭'"
and then they have a theorem that the inverse mapping is also an isomorphism
theorem inverse_monoid_isomorphism:
"monoid_isomorphism (restrict (inv_into M η) M') M' (⋅') 𝟭' M (⋅) 𝟭"
using commutes_with_composition commutes_with_unit surjective
by unfold_locales auto
MY DEFINITION 1:
So I added my improvement by splitting the definition into two parts. First I define a morphism as a function that satisfies f(a⋅b) = f(a)⋅f(b):
locale monoid_morphism = (* This is like homomorphism but lacks the commutes_with_unit axiom *)
map η M M'+ source: monoid M "(⋅)" 𝟭 + target: monoid M' "(⋅')" "𝟭'"
for η and M and composition (infixl "⋅" 70) and unit ("𝟭")
and M' and composition' (infixl "⋅''" 70) and unit' ("𝟭''") +
assumes commutes_with_composition: "⟦ x ∈ M; y ∈ M ⟧ ⟹ η (x ⋅ y) = η x ⋅' η y"
Then I define an isomorphism as a morphism that is bijective:
locale monoid_isomorphism = monoid_morphism + bijective_map η M M'
and then I prove that the neutral element must map to the neutral element, i.e. η 𝟭 = 𝟭':
context monoid_isomorphism begin
theorem commutes_with_unit: "η 𝟭 = 𝟭'"
proof -
{
fix y assume "y ∈ M'"
then obtain x where nxy:"η x = y" "x ∈ M" by (metis image_iff surjective)
then have "η x ⋅' η 𝟭 = η x" using commutes_with_composition[symmetric] by auto
then have "y ⋅' η 𝟭 = y" using nxy by auto
}
then show "η 𝟭 = 𝟭'" by fastforce
qed
end
so in this way, I can remove the superfluous axiom from the definition and make it into a theorem instead.
THE PROBLEM:
So all in all, the two locales are not only equivalent but actually look exactly the same from outside. But somehow the inverse_monoid_isomorphism proof fails now.
theorem inverse_monoid_isomorphism:
"monoid_isomorphism (restrict (inv_into M η) M') M' (⋅') 𝟭' M (⋅) 𝟭"
using commutes_with_composition commutes_with_unit surjective
apply(unfold_locales)
apply(auto)
yields
goal (1 subgoal):
1. ⋀xa xb.
(⋀x y. x ∈ M ⟹ y ∈ M ⟹ η (x ⋅ y) = η x ⋅' η y) ⟹
𝟭' = η 𝟭 ⟹ M' = η ` M ⟹ xa ∈ M ⟹ xb ∈ M ⟹ inv_into M η (η xa ⋅' η xb) = xa ⋅ xb
I tried to see what happens if I change surjective to bijective (among the facts given to using), and then I get a slightly more simplified end result:
goal (1 subgoal):
1. ⋀x y. (⋀x y. x ∈ M ⟹ y ∈ M ⟹ η (x ⋅ y) = η x ⋅' η y) ⟹
x ∈ M' ⟹
y ∈ M' ⟹ 𝟭' = η 𝟭 ⟹ inv_into M η (x ⋅' y) = inv_into M η x ⋅ inv_into M η y
But in the end auto can't do it. Interestingly, when I use bijective then the proof doesn't work even with the original definition of isomorphism.
MY DEFINITION 2:
I also get the same outcome if I define
locale monoid_homomorphism = monoid_morphism η M "(⋅)" 𝟭 M' "(⋅')" "𝟭'"
for η and M and composition (infixl "⋅" 70) and unit ("𝟭")
and M' and composition' (infixl "⋅''" 70) and unit' ("𝟭''") +
assumes commutes_with_unit: "η 𝟭 = 𝟭'"
locale monoid_isomorphism = bijective_map η M M' + monoid_homomorphism
MY DEFINITION 3:
It also doesn't work if I just split the definition into
locale monoid_homomorphism =
source: monoid M "(⋅)" 𝟭 + target: monoid M' "(⋅')" "𝟭'"
for η and M and composition (infixl "⋅" 70) and unit ("𝟭")
and M' and composition' (infixl "⋅''" 70) and unit' ("𝟭''") +
assumes commutes_with_composition: "⟦ x ∈ M; y ∈ M ⟧ ⟹ η x ⋅' η y = η (x ⋅ y)"
and commutes_with_unit: "η 𝟭 = 𝟭'"
text ‹Def 1.3›
text ‹p 37, ll 7--11›
locale monoid_isomorphism = bijective_map η M M' + monoid_homomorphism
Now it is not only logically equivalent, but it's actually syntactically equivalent if you just paste monoid_homomorphism into the definition of monoid_isomorphism (which I tried to do and it works).
locale monoid_isomorphism
fixes η :: "'a ⇒ 'b"
and M :: "'a set"
and composition :: "'a ⇒ 'a ⇒ 'a" (infixl ‹⋅› 70)
and unit :: "'a" (‹𝟭›)
and M' :: "'b set"
and composition' :: "'b ⇒ 'b ⇒ 'b" (infixl ‹⋅''› 70)
and unit' :: "'b" (‹𝟭''›)
assumes "monoid_isomorphism η M (⋅) 𝟭 M' (⋅') 𝟭'"
I tried to use Query > Print Context tab with all boxes ticked and the resulting contexts for the two definitions of this locale are exactly the same (syntactically).
I can't understand why such a seemingly benign change would completely derail a proof.
I have uploaded the full code to
https://github.com/aleksander-mendoza/Isabelle
The answer turned out to be very simple. One definition was
assumes commutes_with_composition: "⟦ x ∈ M; y ∈ M ⟧ ⟹ η (x ⋅ y) = η x ⋅' η y"
while the other was
assumes commutes_with_composition: "⟦ x ∈ M; y ∈ M ⟧ ⟹ η x ⋅' η y = η (x ⋅ y)"
This has an important impact on auto and simp, because they use such equations as rewrite rules: they match the left-hand side and replace the matched terms with the right-hand side.
In order to make the proof work it was enough to add [symmetric]:
using commutes_with_composition[symmetric] commutes_with_unit surjective
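As a toy illustration of the orientation issue (a standalone sketch with made-up names, not part of the original development): simp and auto use an equation as a left-to-right rewrite rule, and the [symmetric] attribute produces the same rule oriented the other way round.
notepad
begin
  fix η :: "nat ⇒ nat" and f g :: "nat ⇒ nat ⇒ nat"
  assume hom: "⋀x y. η (f x y) = g (η x) (η y)"
  (* hom, as stated, rewrites η (f x y) to g (η x) (η y) ... *)
  have "η (f 0 1) = g (η 0) (η 1)" by (simp add: hom)
  (* ... while hom[symmetric] rewrites in the opposite direction *)
  have "g (η 0) (η 1) = η (f 0 1)" by (simp add: hom[symmetric])
end
In this tiny example either orientation happens to close both goals, but in the locale proof above only the reoriented rule lets auto rewrite the goal in the direction it needs.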
I'm formalizing this article in Isabelle. In section 4.1 it describes the following setting:
context
fixes c d :: real
assumes "c ≠ 0" "∃ b. c = b^2" "∃ b'. d = b'^2"
begin
definition t where "t = sqrt(d/c)"
definition e' where "e' x y = x^2 + y^2 - 1 - t^2 * x^2 * y^2"
definition ρ where "ρ x y = (-y,x)"
definition τ where "τ x y = (1/(t*x),1/(t*y))"
It then defines G to be the abelian group of order eight generated by ρ and τ.
Is there an easy way of:
Stating that ρ and τ generate a group.
Since ρ has order 4, τ has order 2 and they commute, I think that all the generated maps commute, and maybe there is a built-in theorem saying that this has to correspond to an abelian group of order 8?
I did make an attempt to solve the problem and came up with a somewhat brute-force solution:
context
fixes c d :: real
assumes "c ≠ 0" "∃b. c = b^2" "∃b'. d = b'^2"
begin
definition t where "t = sqrt(d/c)"
definition e' where "e' x y = x^2 + y^2 - 1 - t^2 * x^2 * y^2"
context
assumes nz_t: "t ≠ 0"
begin
definition ρ :: "real × real ⇒ real × real" where
"ρ z = (-snd z, fst z)"
definition τ :: "real × real ⇒ real × real" where
"τ z = (1/(t*fst z), 1/(t*snd z))"
definition S where
"S ≡
{
id,
(λz. (-snd z, fst z)),
(λz. (-fst z, -snd z)),
(λz. (snd z, -fst z)),
(λz. (1/(t*fst z), 1/(t*snd z))),
(λz. (-1/(t*snd z), 1/(t*fst z))),
(λz. (-1/(t*fst z), -1/(t*snd z))),
(λz. (1/(t*snd z), -1/(t*fst z)))
}"
definition ρS where
"ρS ≡
{id, (λz. (-snd z, fst z)), (λz. (-fst z, -snd z)), (λz. (snd z, -fst z))}"
definition τS where
"τS ≡ {id, (λz. (1/(t*fst z), 1/(t*snd z)))}"
definition BIJ where "BIJ = ⦇carrier = {f. bij f}, mult = comp, one = id⦈"
interpretation bij: group BIJ
unfolding BIJ_def
apply unfold_locales
subgoal by (simp add: bij_comp)
subgoal by (simp add: comp_assoc)
subgoal by simp
subgoal by simp
subgoal by simp
subgoal
unfolding Units_def
by clarsimp
(metis inj_iff bij_betw_def bij_betw_inv_into inv_o_cancel surj_iff)
done
(*the proof may take quite a few seconds*)
lemma comp_S: "x ∈ S ⟹ y ∈ S ⟹ x ∘ y ∈ S"
unfolding comp_apply S_def Set.insert_iff by (elim disjE) fastforce+
lemma comm_S: "x ∈ S ⟹ y ∈ S ⟹ x ∘ y = y ∘ x"
unfolding comp_apply S_def Set.insert_iff by (elim disjE) fastforce+
lemma bij_ρ: "bij ρ"
unfolding bij_def inj_def surj_def ρ_def
by clarsimp (metis add.inverse_inverse)
lemma bij_τ: "bij τ"
unfolding bij_def inj_def surj_def τ_def
proof(simp add: nz_t, intro allI, intro exI)
fix a show "a = 1 / (t * (1/(a*t)))" using nz_t by simp
qed
lemma generate_ρτ: "generate BIJ {ρ, τ} = S"
proof(standard; intro subsetI)
have inv_τ: "inv⇘BIJ⇙ τ = τ"
unfolding m_inv_def
proof(standard)
show "τ ∈ carrier BIJ ∧ τ ⊗⇘BIJ⇙ τ = 𝟭⇘BIJ⇙ ∧ τ ⊗⇘BIJ⇙ τ = 𝟭⇘BIJ⇙"
unfolding BIJ_def apply(intro conjI)
subgoal using bij_τ by simp
subgoal unfolding τ_def using nz_t by auto
subgoal unfolding τ_def using nz_t by auto
done
then show
"y ∈ carrier BIJ ∧ τ ⊗⇘BIJ⇙ y = 𝟭⇘BIJ⇙ ∧ y ⊗⇘BIJ⇙ τ = 𝟭⇘BIJ⇙ ⟹ y = τ"
for y
unfolding BIJ_def by (auto intro: left_right_inverse_eq)
qed
define ρ' :: "real × real ⇒ real × real" where "ρ' = (λz. (snd z, -fst z))"
have bij_ρ': "bij ρ'"
unfolding bij_def inj_def surj_def ρ'_def
by simp (metis add.inverse_inverse)
have inv_ρ: "inv⇘BIJ⇙ ρ = ρ'"
unfolding m_inv_def
proof(standard)
show "ρ' ∈ carrier BIJ ∧ ρ ⊗⇘BIJ⇙ ρ' = 𝟭⇘BIJ⇙ ∧ ρ' ⊗⇘BIJ⇙ ρ = 𝟭⇘BIJ⇙"
unfolding BIJ_def apply(intro conjI)
subgoal using bij_ρ' by auto
subgoal unfolding ρ_def ρ'_def by auto
subgoal unfolding ρ_def ρ'_def by auto
done
then show
"y ∈ carrier BIJ ∧ ρ ⊗⇘BIJ⇙ y = 𝟭⇘BIJ⇙ ∧ y ⊗⇘BIJ⇙ ρ = 𝟭⇘BIJ⇙ ⟹ y = ρ'"
for y
unfolding BIJ_def by (auto intro: left_right_inverse_eq)
qed
have ττ: "τ ⊗⇘BIJ⇙ τ = 𝟭⇘BIJ⇙"
unfolding BIJ_def τ_def comp_def by (auto simp: nz_t)
show "x ∈ generate BIJ {ρ, τ} ⟹ x ∈ S" for x
apply(induction rule: generate.induct)
subgoal unfolding BIJ_def S_def by auto
subgoal unfolding BIJ_def S_def ρ_def τ_def by auto
subgoal
unfolding Set.insert_iff apply(elim disjE)
subgoal using inv_ρ unfolding BIJ_def S_def ρ_def ρ'_def by simp
subgoal using inv_τ unfolding BIJ_def S_def τ_def by simp
subgoal by simp
done
subgoal unfolding BIJ_def by (metis monoid.select_convs(1) comp_S)
done
show "x ∈ S ⟹ x ∈ generate BIJ {ρ, τ}" for x
unfolding S_def Set.insert_iff
proof(elim disjE; clarsimp)
show "id ∈ generate BIJ {ρ, τ}"
unfolding BIJ_def using generate.simps by fastforce
show ρ_gen: "(λz. (- snd z, fst z)) ∈ generate BIJ {ρ, τ}"
by (fold ρ_def, rule generate.simps[THEN iffD2]) simp
show τ_gen: "(λz. (1 / (t * fst z), 1 / (t * snd z))) ∈ generate BIJ {ρ, τ}"
by (fold τ_def) (simp add: generate.incl)
from inv_ρ show inv_ρ_gen: "(λz. (snd z, - fst z)) ∈ generate BIJ {ρ, τ}"
by (fold ρ'_def) (auto simp: generate.inv insertI1)
show ρρ_gen: "(λz. (- fst z, - snd z)) ∈ generate BIJ {ρ, τ}"
proof-
have ρρ: "(λz. (- fst z, - snd z)) = ρ ⊗⇘BIJ⇙ ρ"
unfolding ρ_def BIJ_def by auto
show ?thesis
apply(rule generate.simps[THEN iffD2])
using ρρ ρ_gen[folded ρ_def] by auto
qed
show "(λz. (- (1 / (t * snd z)), 1 / (t * fst z))) ∈ generate BIJ {ρ, τ}"
proof-
have ρτ: "(λz. (- (1 / (t * snd z)), 1 / (t * fst z))) = ρ ⊗⇘BIJ⇙ τ"
unfolding ρ_def τ_def BIJ_def by auto
show ?thesis
apply(rule generate.simps[THEN iffD2])
using ρτ ρ_gen[folded ρ_def] τ_gen[folded τ_def] by auto
qed
show
"(λz. (- (1 / (t * fst z)), - (1 / (t * snd z)))) ∈ generate BIJ {ρ, τ}"
proof-
have ρρτ:
"(λz. (- (1 / (t * fst z)), - (1 / (t * snd z)))) =
(λz. (- fst z, - snd z)) ⊗⇘BIJ⇙ τ"
unfolding τ_def BIJ_def by auto
show ?thesis
apply(rule generate.simps[THEN iffD2])
using ρρτ ρρ_gen τ_gen[folded τ_def] by auto
qed
show "(λz. (1 / (t * snd z), - (1 / (t * fst z)))) ∈ generate BIJ {ρ, τ}"
proof-
have inv_ρ_τ:
"(λz. (1 / (t * snd z), - (1 / (t * fst z)))) =
(λz. (snd z, - fst z)) ⊗⇘BIJ⇙ τ"
unfolding τ_def BIJ_def by auto
show ?thesis
apply(rule generate.simps[THEN iffD2])
using inv_ρ_τ inv_ρ_gen τ_gen[folded τ_def] by auto
qed
qed
qed
lemma "comm_group (BIJ⦇carrier := (generate BIJ {ρ, τ})⦈)"
proof-
have ρτ_ss_BIJ: "{ρ, τ} ⊆ carrier BIJ"
using bij_ρ bij_τ unfolding BIJ_def by simp
interpret ρτ_sg: subgroup "(generate BIJ {ρ, τ})" BIJ
using ρτ_ss_BIJ by (rule bij.generate_is_subgroup)
interpret ρτ_g: group "BIJ⦇carrier := (generate BIJ {ρ, τ})⦈"
by (rule ρτ_sg.subgroup_is_group[OF bij.group_axioms])
have car_S: "carrier (BIJ⦇carrier := S⦈) = S" by simp
have BIJ_comp: "x ⊗⇘BIJ⦇carrier := S⦈⇙ y = x ∘ y" for x y
unfolding BIJ_def by auto
from ρτ_g.group_comm_groupI[
unfolded generate_ρτ car_S BIJ_comp, OF comm_S, simplified
]
show ?thesis unfolding generate_ρτ by assumption
qed
lemma id_pair_def: "(λx. x) = (λz. (fst z, snd z))" by simp
lemma distinct_single: "distinct [x] = True" by simp
lemma ne_ff'_gg'_imp_ne_fgf'g':
assumes "f ≠ f' ∨ g ≠ g'"
shows
"(λz. (f (fst z) (snd z), g (fst z) (snd z))) ≠
(λz. (f' (fst z) (snd z), g' (fst z) (snd z)))"
using assms
proof(rule disjE)
assume "f ≠ f'"
then obtain x y where "f x y ≠ f' x y" by blast
then show ?thesis by (metis (hide_lams) fst_eqD snd_eqD)
next
assume "g ≠ g'"
then obtain x y where "g x y ≠ g' x y" by blast
then show ?thesis by (metis (hide_lams) fst_eqD snd_eqD)
qed
lemma id_ne_hyp: "(λa. a) ≠ (λa. 1/(t*a))"
proof(rule ccontr, simp)
assume id_eq_hyp: "(λa. a) = (λa. 1/(t*a))"
{
fix a :: real assume "a > 0"
define b where "b = sqrt(a)"
from ‹a > 0› have "a = b*b" and "b > 0" unfolding b_def by auto
from id_eq_hyp have "b = 1/(t*b)" by metis
with ‹b > 0› have "b div b =(1/(t*b)) div b" by simp
with ‹b > 0› have "1 = (1/(t*a))" unfolding ‹a = b*b› by simp
with ‹a > 0› nz_t have "t*a = 1" by simp
}
note ta_eq_one = this
define t2 where "t2 = (if t > 0 then 2/t else -2/t)"
with nz_t have "t2 > 0" unfolding t2_def by auto
from nz_t have "t*t2 = 2 ∨ t*t2 = -2" unfolding t2_def by auto
from ta_eq_one ‹t2 > 0› this show False by auto
qed
lemma id_ne_mhyp: "(λa. a) ≠ (λa. -1/(t*a))"
proof(rule ccontr, simp)
assume id_eq_hyp: "(λa. a) = (λa. -(1/(t*a)))"
{
fix a :: real assume "a > 0"
define b where "b = sqrt(a)"
from ‹a > 0› have "a = b*b" and "b > 0" unfolding b_def by auto
from id_eq_hyp have "b = -(1/(t*b))" by metis
with ‹b > 0› have "b div b =-1/(t*b) div b" by simp
with ‹b > 0› have "1 = -1/(t*a)" unfolding ‹a = b*b› by simp
with ‹a > 0› nz_t have "t*a = -1" by (metis divide_eq_1_iff)
}
note ta_eq_one = this
define t2 where "t2 = (if t > 0 then 2/t else -2/t)"
with nz_t have "t2 > 0" unfolding t2_def by auto
from nz_t have "t*t2 = 2 ∨ t*t2 = -2" unfolding t2_def by auto
from ta_eq_one ‹t2 > 0› this show False by auto
qed
lemma mid_ne_hyp: "(λa. -a) ≠ (λa. 1 / (t*a))"
using id_ne_mhyp by (metis minus_divide_left minus_equation_iff)
lemma mid_ne_mhyp: "(λa. -a) ≠ (λa. -1 / (t*a))"
using id_ne_hyp by (metis divide_minus_left minus_equation_iff)
lemma hyp_neq_hyp_1: "(λa. - 1/(t*a)) ≠ (λa. 1/(t*a))"
using nz_t
by (metis divide_cancel_right id_ne_mhyp mult_cancel_right1 mult_left_cancel
one_neq_neg_one)
lemma distinct:
"distinct
[
id,
(λz. (-snd z, fst z)),
(λz. (-fst z, -snd z)),
(λz. (snd z, -fst z)),
(λz. (1/(t*fst z), 1/(t*snd z))),
(λz. (-1/(t*snd z), 1/(t*fst z))),
(λz. (-1/(t*fst z), -1/(t*snd z))),
(λz. (1/(t*snd z), -1/(t*fst z)))
]"
apply(unfold distinct_length_2_or_more)+
unfolding
distinct_length_2_or_more
distinct_single
id_def id_pair_def
HOL.simp_thms(21)
by
(intro conjI)
(
rule ne_ff'_gg'_imp_ne_fgf'g',
metis one_neq_neg_one id_ne_hyp id_ne_mhyp
mid_ne_hyp mid_ne_mhyp hyp_neq_hyp_1
)+
lemma "card S = 8"
using distinct unfolding S_def using card_empty card_insert_disjoint by auto
end
end
Remarks
I relied on sledgehammer for many parts of the proofs and there is some unnecessary code duplication. Therefore, just like most of my answers on SO, this answer is far from perfect from the perspective of the coding style.
I would be interested to know if there is a better overall approach for the solution. Somehow, I came to believe that most of the more thoughtful approaches (e.g. using theorems about cyclic groups to determine the order of ρ and τ and then using |HK|=|H||K|/|H∩K| to determine the order of G) would require proving quite a number of additional theorems for HOL-Algebra, but I did not check with the AFP before making this remark and I do not use HOL-Algebra on a regular basis. Therefore, I may have missed something.
I'm trying to prove that the reflexive-transitive closure of a relation lifted to lists (via list_all2) is equivalent, under some conditions, to the lifting of the reflexive-transitive closure of the underlying relation.
Here is a first lemma:
lemma list_all2_rtrancl1:
"(list_all2 P)⇧*⇧* xs ys ⟹
list_all2 P⇧*⇧* xs ys"
apply (induct rule: rtranclp_induct)
apply (simp add: list.rel_refl)
by (smt list_all2_trans rtranclp.rtrancl_into_rtrancl)
And here is a symmetric lemma:
lemma list_all2_rtrancl2:
"(⋀x. P x x) ⟹
list_all2 P⇧*⇧* xs ys ⟹
(list_all2 P)⇧*⇧* xs ys"
apply (erule list_all2_induct)
apply simp
I guess that the relation should be reflexive, but maybe I should use other assumptions. The lemma could be proven given the assumption that P is transitive; however, P is not transitive. I'm stuck. Could you suggest what assumptions to choose and how to prove this lemma?
It seems that nitpick gives me a wrong counterexample for the specific case of the last lemma (xs = [0] and ys = [2]):
lemma list_all2_rtrancl2_example:
"list_all2 (λx y. x = y ∨ Suc x = y)⇧*⇧* xs ys ⟹
(list_all2 (λx y. x = y ∨ Suc x = y))⇧*⇧* xs ys"
nitpick
I can prove that the lemma holds for this example:
lemma list_all2_rtrancl2_example_0_2:
"list_all2 (λx y. x = y ∨ Suc x = y)⇧*⇧* [0] [2] ⟹
(list_all2 (λx y. x = y ∨ Suc x = y))⇧*⇧* [0] [2]"
apply (rule_tac ?b="[1]" in converse_rtranclp_into_rtranclp; simp)
apply (rule_tac ?b="[2]" in converse_rtranclp_into_rtranclp; simp)
done
It may be feasible to use listrel instead of list_all2. Indeed, as shown below, they are equivalent (see set_listrel_eq_list_all2). However, there are several theorems in the standard library about listrel that do not have their equivalents for list_all2.
lemma set_listrel_eq_list_all2:
"listrel {(x, y). r x y} = {(xs, ys). list_all2 r xs ys}"
using list_all2_conv_all_nth listrel_iff_nth by fastforce
lemma listrel_tclosure_1: "(listrel r)⇧* ⊆ listrel (r⇧*)"
by
(
simp add:
listrel_rtrancl_eq_rtrancl_listrel1
listrel_subset_rtrancl_listrel1
rtrancl_subset_rtrancl
)
lemma listrel_tclosure_2: "refl r ⟹ listrel (r⇧*) ⊆ (listrel r)⇧*"
by
(
simp add:
listrel1_subset_listrel
listrel_rtrancl_eq_rtrancl_listrel1
rtrancl_mono
)
context
includes lifting_syntax
begin
lemma listrel_list_all2_transfer[transfer_rule]:
"((=) ===> (=) ===> (=) ===> (=))
(λr xs ys. (xs, ys) ∈ listrel {(x, y). r x y}) list_all2"
unfolding rel_fun_def using set_listrel_eq_list_all2 listrel_iff_nth by blast
end
lemma list_all2_rtrancl_1:
"(list_all2 r)⇧*⇧* xs ys ⟹ list_all2 r⇧*⇧* xs ys"
proof transfer
fix r :: "'a ⇒ 'a ⇒ bool" and xs :: "'a list" and ys:: "'a list"
assume "(λxs ys. (xs, ys) ∈ listrel {(x, y). r x y})⇧*⇧* xs ys"
then have "(xs, ys) ∈ (listrel {(x, y). r x y})⇧*"
unfolding rtranclp_def rtrancl_def by auto
then have "(xs, ys) ∈ listrel ({(x, y). r x y}⇧*)"
using listrel_tclosure_1 by auto
then show "(xs, ys) ∈ listrel {(x, y). r⇧*⇧* x y}"
unfolding rtranclp_def rtrancl_def by auto
qed
lemma list_all2_rtrancl_2:
"reflp r ⟹ list_all2 r⇧*⇧* xs ys ⟹ (list_all2 r)⇧*⇧* xs ys"
proof transfer
fix r :: "'a ⇒ 'a ⇒ bool" and xs :: "'a list" and ys :: "'a list"
assume as_reflp: "reflp r" and p_in_lr: "(xs, ys) ∈ listrel {(x, y). r⇧*⇧* x y}"
from as_reflp have refl: "refl {(x, y). r x y}"
using reflp_refl_eq by fastforce
from p_in_lr have "(xs, ys) ∈ listrel ({(x, y). r x y}⇧*)"
unfolding rtranclp_def rtrancl_def by auto
with refl have "(xs, ys) ∈ (listrel {(x, y). r x y})⇧*"
using listrel_tclosure_2 by auto
then show "(λxs ys. (xs, ys) ∈ listrel {(x, y). r x y})⇧*⇧* xs ys"
unfolding rtranclp_def rtrancl_def by auto
qed
A direct proof for list_all2 is also provided (legacy):
1. list_all2_induct is applied to the lists (writing L for list_all2, * for the reflexive-transitive closure and + for the transitive closure); the base case is trivial. Thence, it remains to show that (L P)* x#xs y#ys if (L (P*)) xs ys, (L P)* xs ys and P* x y.
2. The idea is that it is possible to find zs (e.g. xs) such that (L P) xs zs and (L P)+ zs ys.
3. Then, given that P* x y and P x x, by induction based on the transitive properties of P*, (L P) x#xs y#zs. Therefore, also, (L P)* x#xs y#zs.
4. Also, given that (L P)+ zs ys and P y y, by induction, (L P)+ y#zs y#ys. Thus, also, (L P)* y#zs y#ys.
5. From 3 and 4 conclude (L P)* x#xs y#ys.
lemma list_all2_rtrancl2:
assumes as_r: "(⋀x. P x x)"
shows "(list_all2 P⇧*⇧*) xs ys ⟹ (list_all2 P)⇧*⇧* xs ys"
proof(induction rule: list_all2_induct)
case Nil then show ?case by simp
next
case (Cons x xs y ys) show ?case
proof -
from as_r have lp_xs_xs: "list_all2 P xs xs" by (rule list_all2_refl)
from Cons.hyps(1) have x_xs_y_zs: "(list_all2 P)⇧*⇧* (x#xs) (y#xs)"
proof(induction rule: rtranclp_induct)
case base then show ?case by simp
next
case (step y z) then show ?case
proof -
have rt_step_2: "(list_all2 P)⇧*⇧* (y#xs) (z#xs)"
by (rule r_into_rtranclp, rule list_all2_Cons[THEN iffD2])
(simp add: step.hyps(2) lp_xs_xs)
from step.IH rt_step_2 show ?thesis by (rule rtranclp_trans)
qed
qed
from Cons.IH have "(list_all2 P)⇧*⇧* (y#xs) (y#ys)"
proof(induction rule: rtranclp_induct)
case base then show ?case by simp
next
case (step ya za) show ?case
proof -
have rt_step_2: "(list_all2 P)⇧*⇧* (y#ya) (y#za)"
by (rule r_into_rtranclp, rule list_all2_Cons[THEN iffD2])
(simp add: step.hyps(2) as_r)
from step.IH rt_step_2 show ?thesis by (rule rtranclp_trans)
qed
qed
with x_xs_y_zs show ?thesis by simp
qed
qed
As a side note, in my view (I know very little about nitpick), nitpick should not provide invalid counterexamples without any warning. I believe, usually, when nitpick 'suspects' that a counterexample may be invalid it notifies the user that the example is 'potentially spurious'. It may be useful to submit a bug report if this issue has not been recorded elsewhere.
Isabelle version: Isabelle2020
I have a partially-defined operator (disj_union below) on sets that I would like to lift to a quotient type (natq). Morally, I think this should be ok, because it is always possible to find in the equivalence class some representative for which the operator is defined [*]. However, I cannot complete the proof that the lifted definition preserves the equivalence, because disj_union is only partially defined. In my theory file below, I propose one way I have found to define my disj_union operator, but I don't like it because it features lots of abs and Rep functions, and I think it would be hard to work with (right?).
What is a good way to define this kind of thing using quotients in Isabelle?
theory My_Theory imports
"~~/src/HOL/Library/Quotient_Set"
begin
(* A ∪-operator that is defined only on disjoint operands. *)
definition "X ∩ Y = {} ⟹ disj_union X Y ≡ X ∪ Y"
(* Two sets are equivalent if they have the same cardinality. *)
definition "card_eq X Y ≡ finite X ∧ finite Y ∧ card X = card Y"
(* Quotient sets of naturals by this equivalence. *)
quotient_type natq = "nat set" / partial: card_eq
proof (intro part_equivpI)
show "∃x. card_eq x x" by (metis card_eq_def finite.emptyI)
show "symp card_eq" by (metis card_eq_def symp_def)
show "transp card_eq" by (metis card_eq_def transp_def)
qed
(* I want to lift my disj_union operator to the natq type.
But I cannot complete the proof, because disj_union is
only partially defined. *)
lift_definition natq_add :: "natq ⇒ natq ⇒ natq"
is "disj_union"
oops
(* Here is another attempt to define natq_add. I think it
is correct, but it looks hard to prove things about,
because it uses abstraction and representation functions
explicitly. *)
definition natq_add :: "natq ⇒ natq ⇒ natq"
where "natq_add X Y ≡
let (X',Y') = SOME (X',Y').
X' ∈ Rep_natq X ∧ Y' ∈ Rep_natq Y ∧ X' ∩ Y' = {}
in abs_natq (disj_union X' Y')"
end
[*] This is a little bit like how capture-avoiding substitution is only defined on the condition that bound variables do not clash; a condition that can always be satisfied by renaming to another representative in the alpha-equivalence class.
What about something like this (just an idea):
definition disj_union' :: "nat set ⇒ nat set ⇒ nat set"
where "disj_union' X Y ≡
let (X',Y') = SOME (X',Y').
card_eq X' X ∧ card_eq Y' Y ∧ X' ∩ Y' = {}
in disj_union X' Y'"
lift_definition natq_add :: "natq ⇒ natq ⇒ natq"
is "disj_union'" oops
For the record, here is Ondřej's suggestion (well, a slight amendment thereof, in which only one of the operands is renamed, not both) carried out to completion...
(* A version of disj_union that is always defined. *)
definition disj_union' :: "nat set ⇒ nat set ⇒ nat set"
where "disj_union' X Y ≡
let Y' = SOME Y'.
card_eq Y' Y ∧ X ∩ Y' = {}
in disj_union X Y'"
(* Can always choose a natural that is not in a given finite subset of ℕ. *)
lemma nats_infinite:
fixes A :: "nat set"
assumes "finite A"
shows "∃x. x ∉ A"
proof (rule ccontr, simp)
assume "∀x. x ∈ A"
hence "A = UNIV" by fast
hence "finite UNIV" using assms by fast
thus False by fast
qed
(* Can always choose n naturals that are not in a given finite subset of ℕ. *)
lemma nat_renaming:
fixes x :: "nat set" and n :: nat
assumes "finite x"
shows "∃z'. finite z' ∧ card z' = n ∧ x ∩ z' = {}"
using assms
apply (induct n)
apply (intro exI[of _ "{}"], simp)
apply (clarsimp)
apply (rule_tac x="insert (SOME y. y ∉ x ∪ z') z'" in exI)
apply (intro conjI, simp)
apply (rule someI2_ex, rule nats_infinite, simp, simp)+
done
lift_definition natq_add :: "natq ⇒ natq ⇒ natq"
is "disj_union'"
apply (unfold disj_union'_def card_eq_def)
apply (rule someI2_ex, simp add: nat_renaming)
apply (rule someI2_ex, simp add: nat_renaming)
apply (metis card.union_inter_neutral disj_union_def empty_iff finite_Un)
done
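As a hypothetical follow-up (not part of the original answer): once natq_add is lifted, properties of it would typically be proved with the transfer method, which reduces them to statements about disj_union' on representatives. For instance, one would expect commutativity to be provable that way; the statement below is only a sketch, with the proof omitted:
lemma natq_add_comm: "natq_add x y = natq_add y x"
  oops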
I'm trying to prove the following simple theorem that I've come up with:
A point is on the boundary iff any small enough ball around that point contains points both in S and out of S.
Below I've managed to do the forward direction but I'm stuck on the backwards direction.
Using the same approach fails on the last step: the goal is close but not quite there, and I'm not sure what to do here:
lemma frontier_ball: "x ∈ frontier S ⟷
(∃r>0. (∀δ>0. δ<r ⟶ ((ball x δ) ∩ S ≠ {} ∧ (ball x δ) ∩ -S ≠ {})))"
(is "?lhs = ?rhs")
proof
{
assume "?lhs"
hence "x ∉ interior S ∧ x ∉ interior (-S)" by (auto simp: frontier_def interior_complement)
hence "∀δ>0. ((ball x δ) ∩ S ≠ {} ∧ (ball x δ) ∩ -S ≠ {})" by (auto simp: mem_interior)
then have "?rhs" by (simp add: Orderings.no_top_class.gt_ex)
}
{
assume "¬?lhs"
hence "x ∈ interior S ∨ x ∈ interior (-S)" by (auto simp: frontier_def interior_complement)
hence "∃δ>0. ball x δ ∩ S = {} ∨ ball x δ ∩ -S = {}" by (auto simp: mem_interior)
then have "¬?rhs" by (simp add: subset_ball)
}
qed
I tried to tell Isabelle how to obtain such a δ, but it's stuck on the obtain step:
lemma frontier_ball: "x ∈ frontier S ⟷
(∃r>0. (∀δ>0. δ<r ⟶ ((ball x δ) ∩ S ≠ {} ∧ (ball x δ) ∩ -S ≠ {})))"
(is "?lhs = ?rhs")
proof
{
assume "?lhs"
hence "x ∉ interior S ∧ x ∉ interior (-S)" by (auto simp: frontier_def interior_complement)
hence "∀δ>0. ((ball x δ) ∩ S ≠ {} ∧ (ball x δ) ∩ -S ≠ {})" by (auto simp: mem_interior)
then have "?rhs" by (simp add: Orderings.no_top_class.gt_ex)
}
{
fix r::real
assume "¬?lhs ∧ r>0"
hence "x ∈ interior S ∨ x ∈ interior (-S)" by (auto simp: frontier_def interior_complement)
then obtain r2 where "r2>0" and "ball x r2 ∩ S = {} ∨ ball x r2 ∩ -S = {}" by (auto simp: mem_interior)
then obtain δ where "δ>0 ∧ δ<r ∧ δ<r2" by auto
}
qed
Any pointers would be great!
Well, you can just construct such a δ. If you have r > 0 and r2 > 0 and you want some δ that fulfils 0 < δ ≤ r2 and 0 < δ < r, why not just use min r2 (r/2)? You can define δ to be that and then you can prove the properties you want:
define δ where "δ = min r2 (r/2)"
with r2 A have δ: "δ > 0" "δ < r" "δ ≤ r2" by auto
with r2 have δ': "ball x δ ∩ S = {} ∨ ball x r2 ∩ -S = {}" using subset_ball[OF δ(3)] by auto
Or, a bit more direct:
lemma frontier_ball: "(x :: 'a :: {metric_space}) ∈ frontier S ⟷
(∃r>0. (∀δ>0. δ<r ⟶ ((ball x δ) ∩ S ≠ {} ∧ (ball x δ) ∩ -S ≠ {})))"
(is "?lhs = ?rhs")
proof -
{
assume "?lhs"
hence "x ∉ interior S ∧ x ∉ interior (-S)" by (auto simp: frontier_def interior_complement)
hence "∀δ>0. ((ball x δ) ∩ S ≠ {} ∧ (ball x δ) ∩ -S ≠ {})" by (auto simp: mem_interior)
then have "?rhs" by (simp add: Orderings.no_top_class.gt_ex)
}
moreover
{
assume lhs: "¬?lhs"
{
fix r :: real assume r: "r > 0"
from lhs have "x ∈ interior S ∨ x ∈ interior (-S)"
by (auto simp: frontier_def interior_complement)
then obtain δ where "δ > 0" "ball x δ ∩ S = {} ∨ ball x δ ∩ -S = {}"
by (auto simp: mem_interior)
with r have "min δ (r/2) > 0" "min δ (r/2) < r"
"ball x (min δ (r/2)) ∩ S = {} ∨ ball x (min δ (r/2)) ∩ -S = {}" using subset_ball by auto
hence "∃δ>0. δ < r ∧ (ball x δ ∩ S = {} ∨ ball x δ ∩ -S = {})" by blast
}
hence "¬?rhs" by blast
}
ultimately show ?thesis by blast
qed
For the record, I would avoid doing things like assume "A ∧ B". Do assume "A" "B" instead. That gives you two facts that you can work with directly, instead of having them wrapped up with a HOL conjunction in one fact.
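A tiny standalone sketch of the difference (a toy example, not taken from the proof above):
notepad
begin
  fix A B :: bool
  {
    (* two separate facts: each one is directly usable *)
    assume a: "A" and b: "B"
    from b have "B" .
  }
  {
    (* a single conjunction: it has to be taken apart before its parts can be used *)
    assume ab: "A ∧ B"
    from ab have "B" by (rule conjunct2)
  }
end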