Simplifying if-then-else in summations or products

While doing some basic algebra, I frequently arrive at a subgoal of the following type (sometimes with a finite sum, sometimes with a finite product).
lemma foo:
fixes N :: nat
fixes a :: "nat ⇒ nat"
shows "(a 0) = (∑x = 0..N. (if x = 0 then 1 else 0) * (a x))"
This seems pretty obvious to me, but neither auto nor auto cong: sum.cong split: if_splits can handle this. What's more, sledgehammer also surrenders when called on this lemma. How can one work efficiently with finite sums and products containing if-then-else in general, and how should one approach this case in particular?

My favourite way to do these things (because it is very general) is to use the rules sum.mono_neutral_left and sum.mono_neutral_cong_left and the corresponding right versions (and analogously for products). The rule sum.mono_neutral_right lets you drop arbitrarily many summands if they are all zero:
finite T ⟹ S ⊆ T ⟹ ∀i∈T - S. g i = 0
⟹ sum g T = sum g S
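For example, on the sum from the question this rule alone already shrinks the index set to {0} while keeping the summand unchanged (a minimal sketch; the fixes clause is only there to pin down the types):
lemma
fixes N :: nat and a :: "nat ⇒ nat"
shows "(∑x = 0..N. (if x = 0 then 1 else 0) * a x) = (∑x ∈ {0}. (if x = 0 then 1 else 0) * a x)"
by (intro sum.mono_neutral_right) auto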
The cong rule additionally allows you to modify the summation function on the now smaller set:
finite T ⟹ S ⊆ T ⟹ ∀i∈T - S. g i = 0 ⟹ (⋀x. x ∈ S ⟹ g x = h x)
⟹ sum g T = sum h S
With those, it looks like this:
lemma foo:
fixes N :: nat and a :: "nat ⇒ nat"
shows "a 0 = (∑x = 0..N. (if x = 0 then 1 else 0) * a x)"
proof -
have "(∑x = 0..N. (if x = 0 then 1 else 0) * a x) = (∑x ∈ {0}. a x)"
by (intro sum.mono_neutral_cong_right) auto
also have "… = a 0"
by simp
finally show ?thesis ..
qed
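The same pattern works for products; since the neutral element of a product is 1, the if-then-else has to return 1 in the irrelevant branch instead of producing a zero factor. A sketch along the same lines, using prod.mono_neutral_cong_right, the product analogue of the rule above:
lemma
fixes N :: nat and a :: "nat ⇒ nat"
shows "a 0 = (∏x = 0..N. (if x = 0 then a x else 1))"
proof -
have "(∏x = 0..N. (if x = 0 then a x else 1)) = (∏x ∈ {0}. a x)"
by (intro prod.mono_neutral_cong_right) auto
also have "… = a 0"
by simp
finally show ?thesis ..
qed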

Assuming the left-hand side could involve an arbitrary index between 0 and N, what about adding a more general lemma
lemma bar:
fixes N :: nat
fixes a :: "nat ⇒ nat"
assumes
"M ≤ N"
shows "a M = (∑x = 0..N. (if x = M then 1 else 0) * (a x))"
using assms by (induction N) force+
and solving the original one with using bar by blast?

Related

Why do I get this exception on an induction rule for a lemma?

I am trying to prove the following lemma (which expresses the meaning of the addition of two binary numerals).
It goes like this:
lemma (in th2) addMeaningF_2: "∀m. m ≤ n ⟹ (m = (len x + len y) ⟹ (evalBinNum_1 (addBinNum x y) = plus (evalBinNum_1 x) (evalBinNum_1 y)))"
I am trying to perform strong induction. When I apply(induction n rule: less_induct) on the lemma, it throws an error.
exception THM 0 raised (line 755 of "drule.ML"):
infer_instantiate_types: type ?'a of variable ?a
cannot be unified with type 'b of term n
(⋀x. (⋀y. y < x ⟹ ?P y) ⟹ ?P x) ⟹ ?P ?a
Can anyone explain this?
Edit:
For more context
locale th2 = th1 +
fixes
plus :: "'a ⇒ 'a ⇒ 'a"
assumes
arith_1: "plus n zero = n"
and plus_suc: "plus n (suc m) = suc (plus n m)"
len and evalBinNum_1 are both recursive functions
len gives us the length of a given binary numeral, while evalBinNum_1 evaluates binary numerals.
fun (in th2) evalBinNum_1 :: "BinNum ⇒ 'a"
where
"evalBinNum_1 Zero = zero"|
"evalBinNum_1 One = suc(zero)"|
"evalBinNum_1 (JoinZero x) = plus (evalBinNum_1 x) (evalBinNum_1 x)"|
"evalBinNum_1 (JoinOne x) = plus (plus (evalBinNum_1 x) (evalBinNum_1 x)) (suc zero)"
The problem is that Isabelle cannot infer the type of n (or of the bound occurrence of m) when trying to use the induction rule less_induct. You might want to add a type annotation such as (n::nat) in your lemma. For the sake of generality, you might want to state instead that the type of n is an instance of the class wellorder, that is, (n::'a::wellorder).

On another subject, I think there is a logical issue with your lemma statement: I guess you actually mean ∀m. m ≤ (n::nat) ⟶ ... ⟶ ... or, equivalently, ⋀m. m ≤ (n::nat) ⟹ ... ⟹ ....

Finally, it would be good to know the context of your problem (e.g., there seems to be a locale th2 involved) for a more precise answer.
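For instance, combining the type annotation with the corrected connectives, the statement might look like this (a sketch only, reusing the constants len, addBinNum and evalBinNum_1 from the question; the proof itself still has to be filled in):
lemma (in th2) addMeaningF_2:
"⋀m. m ≤ (n::nat) ⟹ m = len x + len y ⟹
evalBinNum_1 (addBinNum x y) = plus (evalBinNum_1 x) (evalBinNum_1 y)"
apply (induction n rule: less_induct)
(* less_induct should now apply without the type error; the rest is specific to th2 *)
oops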

Isabelle structure proof

There is a set of some structures. I'm trying to prove that the cardinality of the set equals some number. The full theory is too long to post here, so here is a simplified one just to show the idea.
Let the objects (which I need to count) be sets containing natural numbers from 1 to n. The idea of the proof is as follows. I define a function which transforms sets to lists of 0 and 1. Here are the function and its inverse:
fun set_to_bitmap :: "nat set ⇒ nat ⇒ nat ⇒ nat list" where
"set_to_bitmap xs x 0 = []"
| "set_to_bitmap xs x (Suc n) =
(if x ∈ xs then Suc 0 else 0) # set_to_bitmap xs (Suc x) n"
fun bitmap_to_set :: "nat list ⇒ nat ⇒ nat set" where
"bitmap_to_set [] n = {}"
| "bitmap_to_set (x#xs) n =
(if x = Suc 0 then {n} else {}) ∪ bitmap_to_set xs (Suc n)"
value "set_to_bitmap {1,3,7,8} 1 8"
value "bitmap_to_set (set_to_bitmap {1,3,7,8} 1 8) 1"
Then I plan to prove that 1) the number of 0/1 lists of length n equals 2 ^ n,
2) the functions are bijections,
3) so the cardinality of the original set is 2 ^ n too.
Here are some auxiliary definitions and lemmas which seem useful:
definition "valid_set xs n ≡ (∀a. a ∈ xs ⟶ 0 < a ∧ a ≤ n)"
definition "valid_bitmap ps n ≡ length ps = n ∧ set ps ⊆ {0, Suc 0}"
lemma length_set_to_bitmap:
"valid_set xs n ⟹
x = Suc 0 ⟹
length (set_to_bitmap xs x n) = n"
apply (induct xs x n rule: set_to_bitmap.induct)
apply simp
sorry
lemma bitmap_members:
"valid_set xs n ⟹
x = Suc 0 ⟹
set_to_bitmap xs x n = ps ⟹
set ps ⊆ {0, Suc 0}"
apply (induct xs x n arbitrary: ps rule: set_to_bitmap.induct)
apply simp
sorry
lemma valid_set_to_valid_bitmap:
"valid_set xs n ⟹
x = Suc 0 ⟹
set_to_bitmap xs x n = ps ⟹
valid_bitmap ps n"
unfolding valid_bitmap_def
using bitmap_members length_set_to_bitmap by auto
lemma valid_bitmap_to_valid_set:
"valid_bitmap ps n ⟹
x = Suc 0 ⟹
bitmap_to_set ps x = xs ⟹
valid_set xs n"
sorry
lemma set_to_bitmap_inj:
"valid_set xs n ⟹
valid_set xy n ⟹
x = Suc 0 ⟹
set_to_bitmap xs x n = ps ⟹
set_to_bitmap ys x n = qs ⟹
ps = qs ⟹
xs = ys"
sorry
lemma set_to_bitmap_surj:
"valid_bitmap ps n ⟹
x = Suc 0 ⟹
∃xs. set_to_bitmap xs x n = ps"
sorry
lemma bitmap_to_set_to_bitmap_id:
"valid_set xs n ⟹
x = Suc 0 ⟹
bitmap_to_set (set_to_bitmap xs x n) x = xs"
sorry
lemma set_to_bitmap_to_set_id:
"valid_bitmap ps n ⟹
x = Suc 0 ⟹
set_to_bitmap (bitmap_to_set ps x) x n = ps"
sorry
Here is a final lemma:
lemma valid_set_size:
"card {xs. valid_set xs n} = 2 ^^ n"
Does this approach seem valid? Are there any examples of such a proof? Could you suggest an idea on how to prove the lemmas? I'm stuck because the induction with set_to_bitmap.induct seems not to be applicable here.
In principle, that kind of approach does work: if you have a function f from a set A to a set B and an inverse function to it, you can prove bij_betw f A B (read: f is a bijection from A to B), and that then implies card A = card B.
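For illustration, here is a small generic sketch of that argument, independent of the concrete theory (the lemma name is arbitrary; inj_on_inverseI, imageI, bij_betw_def and bij_betw_same_card are facts from the standard library):
lemma card_eq_via_inverse:
assumes "⋀x. x ∈ A ⟹ f x ∈ B" and "⋀y. y ∈ B ⟹ g y ∈ A"
and "⋀x. x ∈ A ⟹ g (f x) = x" and "⋀y. y ∈ B ⟹ f (g y) = y"
shows "card A = card B"
proof -
have "inj_on f A"
using assms(3) by (rule inj_on_inverseI)
moreover have "f ` A = B"
proof
show "f ` A ⊆ B" using assms(1) by auto
show "B ⊆ f ` A"
proof
fix y assume "y ∈ B"
then have "g y ∈ A" and "f (g y) = y" using assms(2,4) by auto
then show "y ∈ f ` A" by (metis imageI)
qed
qed
ultimately show ?thesis by (metis bij_betw_def bij_betw_same_card)
qed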
However, there are a few comments that I have:
You should use bool lists instead of nat lists if you can only have 0 or 1 in them anyway.
It is usually better to use existing library functions than to define new ones yourself. Your two functions could be defined using library functions like this:
definition set_to_bitmap :: "nat ⇒ nat ⇒ nat set ⇒ bool list" where
"set_to_bitmap x n A = map (λi. i ∈ A) [x..<x+n]"
definition bitmap_to_set :: "nat ⇒ bool list ⇒ nat set" where
"bitmap_to_set n xs = (λi. i + n) ` {i. i < length xs ∧ xs ! i}"
Side note: I would use upper-case letters for sets, not something like xs (which is usually used for lists).
Perhaps this is because you simplified your problem, but in its present form, valid_set A n is simply the same as A ⊆ {1..n}, and {A. valid_set A n} is simply Pow {1..n}. The cardinality of that is easy to show with results from the library:
lemma "card (Pow {1..(n::nat)}) = 2 ^ n"
by (simp add: card_Pow)
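The claimed equivalence can also be stated directly. Using the interval {0<..n} (which for nat is the same set as {1..n}) should keep the proof fully automatic (a sketch, with an arbitrary lemma name):
lemma valid_set_conv: "valid_set A (n::nat) ⟷ A ⊆ {0<..n}"
unfolding valid_set_def by auto
Since {A. A ⊆ {0<..n}} is Pow {0<..n} and card {0<..(n::nat)} = n, the same card_Pow computation gives 2 ^ n.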
As for your original questions: Your first few lemmas are provable, but for the induction to go through, you have to get rid of some of the unneeded assumptions first. The x = Suc 0 is the worst one – there is no way you can use induction if you have that as an assumption, because as soon as you do one induction step, you increase x by 1 and so you won't be able to apply your induction hypothesis. The following versions of your first three lemmas go through easily:
lemma length_set_to_bitmap:
"length (set_to_bitmap xs x n) = n"
by (induct xs x n rule: set_to_bitmap.induct) auto
lemma bitmap_members:
"set (set_to_bitmap xs x n) ⊆ {0, Suc 0}"
by (induct xs x n rule: set_to_bitmap.induct) auto
lemma valid_set_to_valid_bitmap: "valid_bitmap (set_to_bitmap xs x n) n"
unfolding valid_bitmap_def
using bitmap_members length_set_to_bitmap by auto
I also recommend not adding "abbreviations" like ps = set_to_bitmap xs x n as an assumption. It doesn't break anything, but it tends to complicate things needlessly.
The next lemma is a bit trickier. Due to your recursive definitions, you have to generalise the lemma first (valid_set requires the set to lie in the range from 1 to n, but once you make one induction step, it has to lie in the range from 2 to n). The following works:
lemma valid_bitmap_to_valid_set_aux:
"bitmap_to_set ps x ⊆ {x..<x + length ps}"
by (induction ps x rule: bitmap_to_set.induct)
(auto simp: valid_bitmap_def valid_set_def)
lemma valid_bitmap_to_valid_set:
"valid_bitmap ps n ⟹ valid_set (bitmap_to_set ps 1) n"
using valid_bitmap_to_valid_set_aux unfolding valid_bitmap_def valid_set_def
by force
Injectivity and surjectivity (which is your ultimate goal) should follow from the fact that the two functions are inverses of each other. Proving that will probably be doable with induction, but will require a few generalisations and auxiliary lemmas. It should be easier if you stick to the non-recursive definitions using library functions that I sketched above.

Proving the cardinality of a more involved set

Suppose I have a set whose definition involves three conjuncts: {k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2}.
How can I prove in Isabelle that the cardinality of this set is 1? (Namely, only k = 6 has gcd 3 6 = 2.) I.e., how can I prove lemma a_set: "card {k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2} = 1"?
Using sledgehammer (or try) again doesn't yield results; I find it very difficult to work out what exactly I need to give the proof methods to make them able to do the proof. (Even removing, e.g., gcd 3 k = 2 doesn't make it amenable to auto or sledgehammer.)
Your proposition is incorrect. The set you described is actually empty, as gcd 3 6 = 3. Sledgehammer can prove that the cardinality is zero without problems, although the resulting proof is again a bit ugly, as is often the case with Sledgehammer proofs:
lemma "card {k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2} = 0"
by (metis (mono_tags, lifting) card.empty coprime_Suc_nat
empty_Collect_eq eval_nat_numeral(3) gcd_nat.left_idem
numeral_One numeral_eq_iff semiring_norm(85))
Let's do it by hand, just to illustrate how to do it. These sorts of proofs do tend to get a little ugly, especially when you don't know the system well.
lemma "{k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2} = {}"
proof safe
fix x :: nat
assume "x > 2" "x ≤ 7" "gcd 3 x = 2"
from ‹x > 2› and ‹x ≤ 7› have "x = 3 ∨ x = 4 ∨ x = 5 ∨ x = 6 ∨ x = 7" by auto
with ‹gcd 3 x = 2› show "x ∈ {}" by (auto simp: gcd_non_0_nat)
qed
Another, much simpler (but perhaps also more dubious) way would be to use eval. This uses the code generator as an oracle, i.e. it translates the expression to ML code, compiles it, runs it, checks whether the result is True, and then accepts the statement as a theorem without going through the Isabelle kernel as normal proofs do. One should think twice before using this, in my opinion, but for toy examples, it is perfectly all right:
lemma "card {k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2} = 0"
proof -
have "{k::nat. 2<k ∧ k ≤ 7 ∧ gcd 3 k = 2} = Set.filter (λk. gcd 3 k = 2) {2<..7}"
by (simp add: Set.filter_def)
also have "card … = 0" by eval
finally show ?thesis .
qed
Note that I had to massage the set a bit first (using Set.filter instead of the set comprehension) in order for eval to accept it. (Code generation can be a bit tricky.)
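As a quick sanity check, the massaged expression can also be evaluated with the value command, which relies on the same code-generator setup that eval uses:
value "card (Set.filter (λk. gcd 3 k = 2) {2<..7::nat})" (* evaluates to 0 *)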
UPDATE:
For the other statement from the comments, the proof has to look like this:
lemma "{k::nat. 0<k ∧ k ≤ 5 ∧ gcd 5 k = 1} = {1,2,3,4}"
proof (intro equalityI subsetI)
fix x :: nat
assume x: "x ∈ {k. 0 < k ∧ k ≤ 5 ∧ coprime 5 k}"
from x have "x = 1 ∨ x = 2 ∨ x = 3 ∨ x = 4 ∨ x = 5" by auto
with x show "x ∈ {1,2,3,4}" by (auto simp: gcd_non_0_nat)
qed (auto simp: gcd_non_0_nat)
The reason why this looks so different is because the right-hand side of the goal is no longer simply {}, so safe behaves differently and generates a pretty complicated mess of subgoals (just look at the proof state after the proof safe). With intro equalityI subsetI, we essentially just say that we want to prove that A = B by proving a ∈ A ⟹ a ∈ B and the other way round for arbitrary a. This is probably more robust than safe.
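For reference, the two rules being combined here are
equalityI: A ⊆ B ⟹ B ⊆ A ⟹ A = B
subsetI: (⋀x. x ∈ A ⟹ x ∈ B) ⟹ A ⊆ B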

Free type variables in proof by induction

While trying to prove lemmas about functions in continuation-passing style by induction I have come across a problem with free type variables. In my induction hypothesis, the continuation is a schematic variable but its type involves a free type variable. As a result Isabelle is not able to unify the type variable with a concrete type when I try to apply the i.h. I have cooked up this minimal example:
fun add_k :: "nat ⇒ nat ⇒ (nat ⇒ 'a) ⇒ 'a" where
"add_k 0 m k = k m" |
"add_k (Suc n) m k = add_k n m (λn'. k (Suc n'))"
lemma add_k_cps: "∀k. add_k n m k = k (add_k n m id)"
proof(rule, induction n)
case 0 show ?case by simp
next
case (Suc n)
have "add_k (Suc n) m k = add_k n m (λn'. k (Suc n'))" by simp
also have "… = k (Suc (add_k n m id))"
using Suc[where k="(λn'. k (Suc n'))"] by metis
also have "… = k (add_k n m (λn'. Suc n'))"
using Suc[where k="(λn'. Suc n')"] sorry (* Type unification failed *)
also have "… = k (add_k (Suc n) m id)" by simp
finally show ?case .
qed
In the "sorry step", the explicit instantiation of the schematic variable ?k fails with
Type unification failed
Failed to meet type constraint:
Term: Suc :: nat ⇒ nat
Type: nat ⇒ 'a
since 'a is free and not schematic. Without the instantiation the simplifier fails anyway and I couldn't find other methods that would work.
Since I cannot quantify over types, I don't see any way to make 'a schematic inside the proof. When a term variable becomes schematic locally inside a proof, why isn't this the case for the variables in its type too? After the lemma has been proved, they become schematic at the theory level anyway. This seems quite limiting. Could an option to do this be implemented in the future, or is there some inherent limitation? Alternatively, is there an approach that avoids this problem while still keeping the continuation schematically polymorphic in the proven lemma?
Free type variables become schematic in a theorem when the theorem is exported from the block in which the type variables have been fixed. In particular, you cannot quantify over type variables in a block and then instantiate the type variable within the block, as you are trying to do in your induction. Arbitrary quantification over types leads to inconsistencies in HOL, so there is little hope that this could be changed.
Fortunately, there is a way to prove your lemma in CPS style without type quantification. The problem is that your statement is not general enough, because it contains id. If you generalise it, then the proof works:
lemma add_k_cps: "add_k n m (k ∘ f) = k (add_k n m f)"
proof(induction n arbitrary: f)
case 0 show ?case by simp
next
case (Suc n)
have "add_k (Suc n) m (k ∘ f) = add_k n m (k ∘ (λn'. f (Suc n')))" by(simp add: o_def)
also have "… = k (add_k n m (λn'. f (Suc n')))"
using Suc.IH[where f="(λn'. f (Suc n'))"] by metis
also have "… = k (add_k (Suc n) m f)" by simp
finally show ?case .
qed
You get your original theorem back if you choose f = id.
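That final step can also be spelled out explicitly; comp_id is the library fact f ∘ id = f, and the lemma name below is arbitrary (a small sketch, assuming add_k_cps as proved above):
lemma add_k_cps_id: "add_k n m k = k (add_k n m id)"
by (rule add_k_cps[where f = id, unfolded comp_id])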
This is an inherent limitation of how induction works in HOL. Induction is a rule in HOL, so it is not possible to generalize any types in the induction hypothesis.
A specialized solution for your problem is to first prove
lemma add_k_cps_nat: "add_k n m k = k (n + m)"
by (induction n arbitrary: m k) auto
and then prove add_k_cps.
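With add_k_cps_nat in place, the original statement should fall out by rewriting (a sketch):
lemma add_k_cps: "∀k. add_k n m k = k (add_k n m id)"
by (simp add: add_k_cps_nat)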
A general approach is: first prove instances for fixed types, for which the induction works. In the example, this is an induction over nat. Then derive a proof that is generalized in the type itself.

How to use obtain in existential proofs?

I tried to prove an existential theorem
lemma "∃ x. x * (t :: nat) = t"
proof
obtain y where "y * t = t" by (auto)
but I could not finish the proof. So I have the necessary y, but how can I feed it into the original goal?
Soundness of natural deduction requires that you get hold of the witness before you open the existential quantifier. This is why you are not allowed to use obtained variables in show statements. In your example, the initial proof step implicitly applies the rule exI. This turns the existentially quantified variable x into the schematic variable ?x, which can be instantiated later, but the instantiation may only refer to variables that were already in scope when ?x was introduced. In the low-level proof state, obtained variables are meta-quantified (⋀), and the instantiations for ?x can only refer to variables that appear as parameters of ?x.
Therefore, you have to switch the order in your proof:
lemma "∃ x. x * (t :: nat) = t"
proof - (* method - does not change the goal *)
obtain y where "y * t = t" by (auto)
then show ?thesis by (rule exI)
qed
You can give the witness (i.e. the element you want to put in for x) in the show clause:
lemma "∃ x. x * (t :: nat) = t"
proof
show "1*t = t" by simp
qed
Alternatively, when you already know the witness (1 or Suc 0 here), you can explicitly instantiate the rule exI to introduce the existential term:
lemma "∃ x. x * (t :: nat) = t"
by (rule exI[where x = "Suc 0"], simp)
Here, the existential quantifier introduction rule exI (which you can inspect with thm exI) is
?P ?x ⟹ ∃x. ?P x
and you can explore it and instantiate it gradually with the chosen witness.
thm exI[where x = "Suc 0"] is:
?P (Suc 0) ⟹ ∃x. ?P x
and exI[where P = "λ x. x * t = t" and x = "Suc 0"] is
Suc 0 * t = t ⟹ ∃x. x * t = t
And Suc 0 * t = t is only one simplification (simp) away. But the system can figure out the last instantiation P = "λ x. x * t = t" via unification, so it isn't really necessary.
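Putting the pieces together, the fully instantiated rule closes the goal in one step (a variant of the one-liner above):
lemma "∃x. x * (t :: nat) = t"
by (rule exI[where P = "λx. x * t = t" and x = "Suc 0"]) simp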
Related:
Instantiating theorems in Isabelle
