Efficient reasoning in modular arithmetic - isabelle

I decided to prove the following theorem:
theory Scratch
  imports Main
begin

lemma "(3::int)^k mod 4 = 1 ⟷ even k"
proof (cases "even k")
  case True
  then obtain l where "2*l = k" by auto
  then show ?thesis
    using power_mult [of "(3::int)" 2 l]
      and power_mod [of "(9::int)" 4 l] by auto
next
  case False
  then obtain l where "2*l + 1 = k" using odd_two_times_div_two_succ by blast
  then have "(3::int)^k mod 4 = 3"
    using power_mult [of "(3::int)" 2 l]
      and mod_mult_right_eq [of "(3::int)" "9^l" 4]
      and power_mod [of "(9::int)" 4 l]
    by auto
  then show ?thesis using `odd k` by auto
qed

end
The proof is accepted by Isabelle, but to my taste, there is way too much trivial detail as to how calculations mod 4 are performed:
then have "(3::int)^k mod 4 = 3"
  using power_mult [of "(3::int)" 2 l]
    and mod_mult_right_eq [of "(3::int)" "9^l" 4]
    and power_mod [of "(9::int)" 4 l]
  by auto
Apart from the application of power_mult, this is only the application of various rules about which parts of an expression may safely be reduced modulo 4. Is there a proof method that can infer detail like this automatically?
(I'm also open to any other comments about my proof style. One thing that bothers me is the repetitive ::int annotations.)

Following some discussion on IRC and subsequent experimentation, I have learned the following:
Firstly, writing a mod c = b mod c everywhere is somewhat verbose. HOL-Number_Theory.Cong defines the notation [a = b] (mod c), which is much more pleasant to use.
theory Scratch
  imports
    Main
    "HOL-Number_Theory.Cong" (* or "HOL-Number_Theory.Number_Theory",
                                 but that requires some more computation *)
begin
(Note that this will make Isabelle process the imported theories, which can take some time. You might want to run isabelle jedit -l HOL-Number_Theory so that this happens only once.)
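As a quick sanity check of the notation: the congruence unfolds to the familiar mod equation via cong_def. A minimal, untested sketch:
lemma "[(7::int) = 3] (mod 4)"
  by (simp add: cong_def)  (* cong_def: [a = b] (mod m) ⟷ a mod m = b mod m *)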
A one-line proof of the theorem will still require manually instantiating the relevant lemmas. However, it is usually a better idea to spell out the steps some more. This will allow the tactics to infer how the theorems should be instantiated, while leaving more useful detail for the human.
proof (cases "even k")
  case True
  then obtain l where "2*l = k" by auto
  then have "[3^k = (3^2)^l] (mod 4)" (is "cong _ ... _")
    by (auto simp add: power_mult)
  also have "[... = (1::int)^l] (mod 4)" (is "cong _ ... _")
    by (intro cong_pow) (simp add: cong_def)
  also have "[... = 1] (mod 4)" by auto
  finally have "[3^k = 1::int] (mod 4)" .
  thus ?thesis using `even k` by blast
next
  case False
  then obtain l where "2*l + 1 = k"
    using oddE by blast
  then have "[3^k = 3^(2*l+1)] (mod 4)" (is "cong _ ... _") by auto
  also have "[... = (3^2)^l * 3] (mod 4)" (is "cong _ ... _")
    by (metis power_mult power_add power_one_right cong_def)
  also have "[... = (1::int)^l * 3] (mod 4)" (is "cong _ ... _")
    by (intro cong_mult cong_pow) (auto simp add: cong_def)
  also have "[... = 3] (mod 4)" by auto
  finally have "[3^k ≠ 1::int] (mod 4)" by (auto simp add: cong_def)
  then show ?thesis using `odd k` by blast
qed
This is a pretty standard calculational proof, where we use ... to refer to the right-hand side of the previous statement. By default, ... is bound to the last argument of the top-level function application; in this case that would be the modulus, so we override it with (is "cong _ ... _").
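For comparison, here is how ... behaves in an ordinary equational calculation, where the default binding (the right-hand side of =) is already what we want. A minimal, untested sketch, to be placed inside some proof body:
have "(2::int) + 2 = 4" by simp
also have "... = 2 * 2" by simp       (* ... abbreviates 4, the RHS of the previous step *)
finally have "(2::int) + 2 = 2 * 2" .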
Also note that we can avoid a large part of the work by reusing minus_one_power_iff:
lemma "[3^k = 1::int] (mod 4) ⟷ even k"
proof -
  have "[3^k = (-1::int)^k] (mod 4)"
    by (intro cong_pow) (auto simp: cong_def)
  thus ?thesis
    by (auto simp: cong_def minus_one_power_iff)
qed
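The key fact used in the last step is minus_one_power_iff; its statement is roughly the following (up to the exact formulation in the current library):
thm minus_one_power_iff
(* (- 1) ^ n = (if even n then 1 else - 1) *)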

Related

Using `defines` with induction

Consider the following lemma, which should be easily provable:
lemma
  fixes n m::nat
  defines "m ≡ n - 1"
  shows "m ≤ n"
proof (induction n)
  case 0
  then show ?case unfolding m_def
    (* Why does «n» appear here? *)
next
  case (Suc n)
  then show ?case sorry
qed
However, after unfolding m_def, the goal becomes n - 1 ≤ 0 instead of 0 - 1 ≤ 0, rendering the goal unprovable, since n = 2 is a counterexample.
Is this a bug in Isabelle? How can I unfold the definition correctly?
I think a useful explanation could be the following: recall the statement of nat.induct, namely
?P 0 ⟹ (⋀n. ?P n ⟹ ?P (Suc n)) ⟹ ?P ?n
and note that ?n is a schematic variable, meaning that n is implicitly universally quantified; that is, the previous statement is equivalent to
⋀n. ?P 0 ⟹ (⋀n. ?P n ⟹ ?P (Suc n)) ⟹ ?P n
Now, when applying nat.induct to your example, clearly the first subgoal to prove is ?P 0, i.e., m ≤ 0. However, in that context, n is still an arbitrary but fixed nat; in particular, it does not hold that n = 0, and that is why, after unfolding the definition of m, you get n - 1 ≤ 0 as the new subgoal. With respect to your specific question, the problem is that you cannot prove your result by induction on n (but you can easily prove it directly by unfolding m_def and using simp).
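For reference, the direct proof mentioned above is just the lemma from the question, closed without induction:
lemma
  fixes n m::nat
  defines "m ≡ n - 1"
  shows "m ≤ n"
  unfolding m_def by simp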
As Javier pointed out, the n defined in the lemma head is different from the n created by induction. In other words, any facts from "outside" that reference n are not directly usable within the proof (induction n) environment.
However, Isabelle does offer a way to "inject" such facts, by piping them into induction:
lemma
  fixes n m::nat
  defines "m ≡ n - 1"
  shows "m ≤ n"
  using m_def (* this allows induction to use this fact *)
proof (induction n)
  case 0
  then show ?case by simp
next
  case (Suc n)
  then show ?case by simp
qed
using assms will work just as well in this case.
Note that directly referring to m_def is no longer necessary, since a version of it is included in each case (in 0.prems and Suc.prems; use print_cases inside the proof for more information).

Using the type-to-sets approach for defining quotients

Isabelle has some automation for quotient reasoning through the quotient package. I would like to see if that automation is of any use for my example. The relevant definition is:
definition e_proj where "e_proj = e'_aff_bit // gluing"
So I try to write:
typedef e_aff_t = e'_aff_bit
quotient_type e_proj_t = "e'_aff_bit" / "gluing"
However, I get the error:
Extra type variables in representing set: "'a"
The error(s) above occurred in typedef "e_aff_t"
This is because, as Manuel Eberl explains here, we cannot have type definitions that depend on type parameters. In the past, it was suggested to me that I use the types-to-sets approach.
How would that approach work in my example? Would it lead to more automation?
In the past, it was suggested to me that I use the types-to-sets approach ...
The suggestion that was made in my previous answer was to use the standard set-based infrastructure for reasoning about quotients. I only mentioned that there exist other options for completeness.
I still believe that it is best not to use Types-To-Sets, provided that the definition of a quotient type is the only reason why you wish to use Types-To-Sets:
Even with Types-To-Sets, you will only be able to mimic the behavior of a quotient type in a local context with certain additional assumptions. Upon leaving the local context, the theorems that use locally defined quotient types would need to be converted to the set-based theorems that would inevitably rely on the standard set-based infrastructure for reasoning about quotients.
One would need to develop additional Isabelle/ML infrastructure before the Local Typedef Rule could be used to define quotient types locally in a convenient manner. It should not be too difficult to develop an infrastructure that is usable, but it would take some time to develop something that is universally applicable. Personally, I do not consider this application to be sufficiently important to invest my time in it.
In my view, it is only viable to use Types-To-Sets for the definition of quotient types locally if you are already using Types-To-Sets for its intended purpose in a given development. Then, the possibility of using the framework for the definition of quotient types locally can be seen as a 'value-added benefit'.
For completeness, I provide an example that I developed for an answer on the mailing list some time ago. Of course, this is merely a demonstration of the concept, not a solution that can be used for work that is meant to be published in some form. To make this usable, one would need to convert this development into an Isabelle/ML command that takes care of all the details automatically.
theory Scratch
  imports
    Main
    "HOL-Types_To_Sets.Prerequisites"
    "HOL-Types_To_Sets.Types_To_Sets"
begin

locale local_typedef =
  fixes R :: "['a, 'a] ⇒ bool"
  assumes is_equivalence: "equivp R"
begin

(* The exposition subsumes some of the content of
   HOL/Types_To_Sets/Examples/Prerequisites.thy *)
context
  fixes S and s :: "'s itself"
  defines S: "S ≡ {x. ∃u. x = {v. R u v}}"
  assumes Ex_type_definition_S:
    "∃(Rep::'s ⇒ 'a set) (Abs::'a set ⇒ 's). type_definition Rep Abs S"
begin

definition "rep = fst (SOME (Rep::'s ⇒ 'a set, Abs). type_definition Rep Abs S)"
definition "Abs = snd (SOME (Rep::'s ⇒ 'a set, Abs). type_definition Rep Abs S)"

definition "rep' a = (SOME x. a ∈ S ⟶ x ∈ a)"
definition "Abs' x = (SOME a. a ∈ S ∧ a = {v. R x v})"

definition "rep'' = rep' o rep"
definition "Abs'' = Abs o Abs'"

lemma type_definition_S: "type_definition rep Abs S"
  unfolding Abs_def rep_def split_beta'
  by (rule someI_ex) (use Ex_type_definition_S in auto)

lemma rep_in_S[simp]: "rep x ∈ S"
  and rep_inverse[simp]: "Abs (rep x) = x"
  and Abs_inverse[simp]: "y ∈ S ⟹ rep (Abs y) = y"
  using type_definition_S
  unfolding type_definition_def by auto

definition cr_S where "cr_S ≡ λs b. s = rep b"

lemmas Domainp_cr_S = type_definition_Domainp[OF type_definition_S cr_S_def, transfer_domain_rule]
lemmas right_total_cr_S = typedef_right_total[OF type_definition_S cr_S_def, transfer_rule]
  and bi_unique_cr_S = typedef_bi_unique[OF type_definition_S cr_S_def, transfer_rule]
  and left_unique_cr_S = typedef_left_unique[OF type_definition_S cr_S_def, transfer_rule]
  and right_unique_cr_S = typedef_right_unique[OF type_definition_S cr_S_def, transfer_rule]

lemma cr_S_rep[intro, simp]: "cr_S (rep a) a" by (simp add: cr_S_def)
lemma cr_S_Abs[intro, simp]: "a∈S ⟹ cr_S a (Abs a)" by (simp add: cr_S_def)

(* this part was sledgehammered - please do not pay attention to the
   (absence of) proof style *)
lemma r1: "∀a. Abs'' (rep'' a) = a"
  unfolding Abs''_def rep''_def comp_def
proof -
  {
    fix s'
    note repS = rep_in_S[of s']
    then have "∃x. x ∈ rep s'" using S equivp_reflp is_equivalence by force
    then have "rep' (rep s') ∈ rep s'"
      using repS unfolding rep'_def by (metis verit_sko_ex')
    moreover with is_equivalence repS have "rep s' = {v. R (rep' (rep s')) v}"
      by (smt CollectD S equivp_def)
    ultimately have arr: "Abs' (rep' (rep s')) = rep s'"
      unfolding Abs'_def by (smt repS some_sym_eq_trivial verit_sko_ex')
    have "Abs (Abs' (rep' (rep s'))) = s'" unfolding arr by (rule rep_inverse)
  }
  then show "∀a. Abs (Abs' (rep' (rep a))) = a" by auto
qed

lemma r2: "∀a. R (rep'' a) (rep'' a)"
  unfolding rep''_def rep'_def
  using is_equivalence unfolding equivp_def by blast

lemma r3: "∀r s. R r s = (R r r ∧ R s s ∧ Abs'' r = Abs'' s)"
  apply(intro allI)
  apply standard
  subgoal unfolding Abs''_def Abs'_def
    using is_equivalence unfolding equivp_def by auto
  subgoal unfolding Abs''_def Abs'_def
    using is_equivalence unfolding equivp_def
    by (smt Abs''_def Abs'_def CollectD S comp_apply local.Abs_inverse
        mem_Collect_eq someI_ex)
  done

definition cr_Q where "cr_Q = (λx y. R x x ∧ Abs'' x = y)"

lemma quotient_Q: "Quotient R Abs'' rep'' cr_Q"
  unfolding Quotient_def
  apply(intro conjI)
  subgoal by (rule r1)
  subgoal by (rule r2)
  subgoal by (rule r3)
  subgoal by (rule cr_Q_def)
  done

(* instantiate the quotient lemmas from the theory Lifting *)
lemmas Q_Quotient_abs_rep = Quotient_abs_rep[OF quotient_Q]
(*...*)

(* prove the statements about the quotient type 's *)
(*...*)

(* transfer the results back to 'a using the capabilities of transfer -
   not demonstrated in the example *)
lemma aa: "(a::'a) = (a::'a)"
  by auto

end

thm aa[cancel_type_definition]
(* this shows {x. ∃u. x = {v. R u v}} ≠ {} ⟹ ?a = ?a *)

end

Proving a theorem of the form ¬P ∨ Q using induction in Isabelle

I have a function called feedback which computes powers of 3, i.e. feedback t = 3^t:
primrec feedback :: "nat ⇒ nat" where
  "feedback 0 = Suc(0)" |
  "feedback (Suc t) = (feedback t)*3"
I want to prove, using induction, that if t > 5 then feedback t > 200:
lemma th2: "¬(t>5) ∨ ((feedback t) > 200)" (is "?H(t)" is "?P(t)∨?Q(t)" is "(?P(t))∨(?F(t) > 200)")
proof(induct t)
  case 0 show "?P 0 ∨ ?Q 0" by simp
next
  assume a: "?F(t) > 200"
  assume d: "?P(t) = False"
  have b: "?F (Suc(t)) ≥ ?F(t)" by simp
  from b and a have c: "?F(Suc(t)) > 200" by simp
  from c have e: "?Q(Suc(t))" by simp
  from d have f: "?P(Suc(t)) = False" by simp
  from f and e have g: "?P(Suc(t))∨?Q(Suc(t))" by simp
  from a and d and g have h: "?P(t)∨?Q(t) ⟹ ?P(Suc(t))∨?Q(Suc(t))" by simp
  from a and d have "?H(Suc(t))" by simp
qed
My informal argument: first I prove that feedback (t+1) ≥ feedback t; then, assuming feedback t > 200, it follows that feedback (t+1) > 200. Assuming t > 5, it follows that t+1 > 5. So ¬(t+1 > 5) ∨ feedback (t+1) > 200 is true, and thus if P(t) is true then P(t+1) is true.
But this is not working, and I have no idea what the problem is.
Well, first of all, you cannot simply assume arbitrary things in Isar. Or rather, you can do that, but you won't be able to show your goal after you've done that. The things that Isar allows you to assume are quite rigid; in your case, it's ¬ 5 < t ∨ 200 < feedback t.
I recommend using the case command, which assumes the right things for you. Then you can do a case distinction about that disjunction and then another one about whether t = 5:
lemma th2: "¬(t>5) ∨ ((feedback t) > 200)"
proof (induct t)
  case 0
  show ?case by simp
next
  case (Suc t)
  thus ?case
  proof
    assume "¬t > 5"
    moreover have "feedback 6 = 729" by code_simp
      -- ‹"simp add: eval_nat_numeral" would also work›
    ultimately show ?thesis
      by (cases "t = 5") auto
  next
    assume "feedback t > 200"
    thus ?thesis by simp
  qed
qed
Or, more compactly:
lemma th2: "¬(t>5) ∨ ((feedback t) > 200)"
proof (induct t)
  case (Suc t)
  moreover have "feedback 6 = 729" by code_simp
  ultimately show ?case by (cases "t = 5") auto
qed simp_all
If your feedback function is actually monotonic, I would recommend proving that first; then the proof becomes a little less tedious.
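For instance, one could first characterise feedback as a power of 3 and derive monotonicity from power_increasing. This is an untested sketch; the lemma names feedback_altdef and feedback_mono are made up here, and the proof scripts may need adjusting:
lemma feedback_altdef: "feedback t = 3 ^ t"
  by (induct t) (simp_all add: mult.commute)

lemma feedback_mono: "m ≤ n ⟹ feedback m ≤ feedback n"
  by (auto simp: feedback_altdef intro: power_increasing)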

Proving the cardinality of a more involved set

Suppose I have a set defined by three conjuncts: {k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2}.
How can I prove in Isabelle that the cardinality of this set is 1? (Namely, only k = 6 has gcd 3 6 = 2.) That is, how can I prove lemma a_set: "card {k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2} = 1"?
Using sledgehammer (or try) again doesn't yield results. I find it very difficult to work out exactly what I need to give the proof methods to enable them to do the proof. (Even removing, e.g., the gcd 3 k = 2 conjunct doesn't make it amenable to auto or sledgehammer.)
Your proposition is incorrect. The set you described is actually empty, as gcd 3 6 = 3. Sledgehammer can prove that the cardinality is zero without problems, although the resulting proof is again a bit ugly, as is often the case with Sledgehammer proofs:
lemma "card {k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2} = 0"
  by (metis (mono_tags, lifting) card.empty coprime_Suc_nat
      empty_Collect_eq eval_nat_numeral(3) gcd_nat.left_idem
      numeral_One numeral_eq_iff semiring_norm(85))
Let's do it by hand, just to illustrate how to do it. These sorts of proofs do tend to get a little ugly, especially when you don't know the system well.
lemma "{k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2} = {}"
proof safe
  fix x :: nat
  assume "x > 2" "x ≤ 7" "gcd 3 x = 2"
  from ‹x > 2› and ‹x ≤ 7› have "x = 3 ∨ x = 4 ∨ x = 5 ∨ x = 6 ∨ x = 7" by auto
  with ‹gcd 3 x = 2› show "x ∈ {}" by (auto simp: gcd_non_0_nat)
qed
Another, much simpler (but perhaps also more dubious) way would be to use eval. This uses the code generator as an oracle, i.e. it translates the expression to ML code, compiles and runs it, checks whether the result is True, and then accepts the statement as a theorem without going through the Isabelle kernel as normal proofs do. One should think twice before using this, in my opinion, but for toy examples it is perfectly all right:
lemma "card {k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2} = 0"
proof -
  have "{k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2} = Set.filter (λk. gcd 3 k = 2) {2<..7}"
    by (simp add: Set.filter_def)
  also have "card … = 0" by eval
  finally show ?thesis .
qed
Note that I had to massage the set a bit first (using Set.filter instead of a set comprehension) in order for eval to accept it. (Code generation can be a bit tricky.)
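One can also ask the code generator to evaluate the massaged set directly; a quick, untested sketch:
value "Set.filter (λk. gcd 3 k = 2) {2::nat<..7}"
(* should print the empty set, confirming that no such k exists *)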
UPDATE:
For the other statement from the comments, the proof has to look like this:
lemma "{k::nat. 0<k ∧ k ≤ 5 ∧ gcd 5 k = 1} = {1,2,3,4}"
proof (intro equalityI subsetI)
  fix x :: nat
  assume x: "x ∈ {k. 0 < k ∧ k ≤ 5 ∧ coprime 5 k}"
  from x have "x = 1 ∨ x = 2 ∨ x = 3 ∨ x = 4 ∨ x = 5" by auto
  with x show "x ∈ {1,2,3,4}" by (auto simp: gcd_non_0_nat)
qed (auto simp: gcd_non_0_nat)
The reason why this looks so different is that the right-hand side of the goal is no longer simply {}, so safe behaves differently and generates a pretty complicated mess of subgoals (just look at the proof state after proof safe). With intro equalityI subsetI, we essentially just say that we want to prove A = B by proving a ∈ A ⟹ a ∈ B and the other way round for arbitrary a. This is probably more robust than safe.
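For reference, the statements of the two introduction rules used here are (up to variable names):
thm equalityI  (* ⟦A ⊆ B; B ⊆ A⟧ ⟹ A = B *)
thm subsetI    (* (⋀x. x ∈ A ⟹ x ∈ B) ⟹ A ⊆ B *)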

How to get Isabelle to recognize an obvious conclusion

I'm trying to prove in Isabelle that the frontier, interior and exterior of a set are disjoint. On the line I have marked '***', the fact that c ∩ d = {} clearly follows from the previous line given the assumption at the start of the block, so how would I get Isabelle to understand this?
theory Scratch
  imports
    "~~/src/HOL/Multivariate_Analysis/Topology_Euclidean_Space"
    "~~/src/HOL/Probability/Sigma_Algebra"
begin

lemma boundary_disjoint: "disjoint {frontier S, interior S, interior (-S)}"
proof (rule disjointI)
  fix c d assume sets:
    "c ∈ {frontier S, interior S, interior (-S)}"
    "d ∈ {frontier S, interior S, interior (-S)}"
    and "c ≠ d"
  show "c ∩ d = {}"
  proof cases
    assume "c = frontier S ∧ d = interior S"
    then show ?thesis using frontier_def by auto
  next
    assume "c = frontier S ∧ d = interior (-S)"
    have "closure S ∩ interior (-S) = {}" by (simp add: closure_interior)
    hence "frontier S ∩ interior (-S) = {}" using frontier_def by auto
    *** then show ?thesis by auto
  next
  qed
qed

end
In Isar, you have to explicitly reference the facts you want to use. If you say that your goal follows from the previous line and the local assumption you made, you should give the assumption a name by writing assume A: "c = frontier S ∧ d = interior (-S)", and then you can prove your goal by with A have ?thesis by auto.
Why did I write have and not show? Well, there is another problem. You did a proof cases, but that uses the rule (P ⟹ Q) ⟹ (¬P ⟹ Q) ⟹ Q, i.e. it does a case distinction of the kind ‘Is P true or false?’. That is not what you want here.
One way to do your case distinction is something like this:
from sets show "c ∩ d = {}"
proof (elim singletonE insertE)
insertE is an elimination rule for facts of the form x ∈ insert y A, and since {a,b,c} is just syntactic sugar for insert a (insert b (insert c A)), this is what you want. singletonE is similar, but specifically for x ∈ {y}; using singletonE instead of insertE means you do not get trivial cases with assumptions like x ∈ {}.
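Their statements are roughly the following (up to variable names):
thm insertE     (* ⟦a ∈ insert b A; a = b ⟹ P; a ∈ A ⟹ P⟧ ⟹ P *)
thm singletonE  (* ⟦b ∈ {a}; b = a ⟹ P⟧ ⟹ P *)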
This gives you 9 cases, of which 3 are trivially solved by simp_all. The rest you have to prove yourself in Isar if you want to, but they can be solved quite easily by auto as well:
from sets and `c ≠ d` show "c ∩ d = {}"
by (auto simp: frontier_def closure_def interior_closure)
