Instantiating variables ending in a digit using where-attribute (Isabelle)

In Isabelle, given a theorem thm with a free variable x (more precisely, a schematic variable), one can instantiate x using the where-attribute.
E.g., thm[where x=5]
I am unable to make this work if the variable name ends in a digit, e.g., thm[where x1=5]. This seems to be because the variable is represented in the theorem as "?x1.0" and not as "?x1".
The theory below gives an example.
My question is: How do I instantiate x1 in such a theorem? (E.g., the theorem in the theory below.)
"Solutions" that I am aware of:
- Using thm[of 1] instead of thm[where x1=1]. That works in some cases, but for theorems with many variables this becomes very unwieldy and unstable (the order of variables may change).
- Using only variable names not ending in digits. That would work, but sometimes variables like x1 are very natural in the given context.
theory Tmp imports Main begin
lemma l1: "x+y=y+(x::nat)" by simp
thm l1[where x=1]
(* Prints: 1 + ?y = ?y + 1 *)
lemma l2: "x1+x2=x1+(x2::nat)" by simp
thm l2[where x1=1]
(* Prints: No such variable in theorem: "?x1" *)
thm l2
(* Prints: ?x1.0 + ?x2.0 = ?x1.0 + ?x2.0 *)

You must use the full name of the schematic variable including the question mark:
thm l2[where ?x1.0 = 1]
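With the example theory above, this should then print something like:
(* Prints: 1 + ?x2.0 = 1 + ?x2.0 *)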

Related

Isabelle `subst` but replace right side with left side

Suppose the goal is P l. Then I can use apply(subst X), where X is of the form l=r, and as a result I obtain P r. My question is whether there exists some other tactic like subst that could use X to change P r into P l.
Here is an example
theorem mul_1_I : "(x::nat) = 1 * x" by (rule sym, rule Nat.nat_mult_1)
theorem "(λ x::nat . x) ≤ (λ x::nat . 2*x)"
using [[simp_trace]]
apply(rule le_funI)
apply(subst mul_1_I)
apply(rule mult_le_mono1)
apply(simp)
done
where
lemma nat_mult_1: "1 * n = n"
Right now I have to first prove the auxiliary lemma mul_1_I, which applies sym to nat_mult_1, and only then can I use subst. It would be ideal if I didn't have to create a new lemma specifically for this.
You can use the symmetric attribute to derive the swapped fact. For example, if x is of the form l = r, then x [symmetric] is the fact r = l (which is also valid due to the symmetry of =). Therefore, in your particular case you can use subst nat_mult_1 [symmetric] directly and avoid creating your auxiliary lemma.
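A sketch of the revised proof from the question under that change (the subst step should behave just like the original one with mul_1_I):
theorem "(λ x::nat . x) ≤ (λ x::nat . 2*x)"
apply(rule le_funI)
apply(subst nat_mult_1 [symmetric])
apply(rule mult_le_mono1)
apply(simp)
done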

Conditional rewrite rules with unknowns in condition

In my theory I have some bigger definitions, from which I derive some simple properties using lemmas.
My problem is that the lemmas for deriving the properties are not used by the simplifier, and I have to instantiate them manually. Is there a way to make this more automatic?
A minimal example is shown below:
definition complexFact :: "int ⇒ int ⇒ int ⇒ bool" where
"complexFact x y z ≡ x = y + z"
lemma useComplexFact: "complexFact x y z ⟹ x = y + z"
by (simp add: complexFact_def)
lemma example_1:
assumes cf: "complexFact a b c"
shows "a = b + c"
apply (simp add: cf useComplexFact) (* easy, works *)
done
lemma example_2a:
assumes cf: "complexFact a b c"
shows "a - b = c"
apply (simp add: cf useComplexFact) (* does not work *)
oops
lemma example_2b:
assumes cf: "complexFact a b c"
shows "a - b = c"
apply (simp add: useComplexFact[OF cf]) (* works *)
done
lemma example_2c:
assumes cf: "complexFact a b c"
shows "a - b = c"
apply (subst useComplexFact) (* manually it also works*)
apply (subst cf)
apply simp+
done
I found the following paragraph in the reference manual, so I guess I could solve my problem with a custom solver.
However, I never really touched the internal ML part of Isabelle and don't know where to start.
Rewriting does not instantiate unknowns. For example, rewriting alone cannot prove a ∈ ?A since this requires instantiating ?A. The solver, however, is an arbitrary tactic and may instantiate unknowns as it pleases. This is the only way the Simplifier can handle a conditional rewrite rule whose condition contains extra variables.
The Isabelle simplifier by itself never instantiates unknowns in the assumptions of conditional rewrite rules. However, the solvers can do that, and the most reliable one is assumption. So if complexFact a b c literally appears in the assumptions of the goal (rather than being added to the simpset with simp add: or with [simp]), the assumption solver kicks in and instantiates the unknowns. However, it will only use the first instance of complexFact in the assumptions; if there are several of them, it will not try all of them. In summary, it is better to write
lemma
assumes cf: "complexFact a b c"
shows "a = b + c"
using cf
apply(simp add: useComplexFact)
The second problem with your example is that a = b + c with a, b, and c free is not a good rewrite rule, because the head symbol on the left-hand side is not a constant but the free variable a. Therefore, the simplifier will not use the equation a = b + c to replace a with b + c, but to replace literal occurrences of the equation a = b + c with True. You can see this preprocessing in the trace of the simplifier (enable it locally with using [[simp_trace]]). That is the reason why example_1 works and the others don't. If you can change your left-hand side such that there is a constant as the head symbol, then some decent proof automation should be possible without writing a custom solver.
Further, you can do some (limited) form of forward reasoning by using useComplexFact as a destruction rule. That is,
using assms
apply(auto dest!: useComplexFact)
can also work in some cases. However, this is pretty close to unfolding the definition in terms of scalability.
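For instance, a variant of example_2b in this style might look as follows (a sketch; whether auto closes the remaining arithmetic goal depends on the simp rules in scope):
lemma example_2d:
assumes cf: "complexFact a b c"
shows "a - b = c"
using cf
apply (auto dest!: useComplexFact)
done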

equivalence of arithmetic expressions using algebra_simps

In Programming and Proving in Isabelle/HOL there is Exercise 2.4, which suggests using 'algebra_simps' on simple arithmetic expressions, represented as 'datatype exp'. Could somebody give an example of how some simple properties of such expressions could be proven using algebra_simps? For example, 'Mult a b = Mult b a'?
In general I am trying to prove equivalence of simple arithmetic expressions represented in similar form (with limited set of operators).
If you have defined your eval function appropriately, you can prove the property you gave in your example like this:
lemma Mult_comm: "eval (Mult a b) x = eval (Mult b a) x"
by simp
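(Here "defined appropriately" means an eval roughly like the following; the exact constructor and function names are assumptions, following the exercise.)
datatype exp = Var | Const int | Add exp exp | Mult exp exp
fun eval :: "exp ⇒ int ⇒ int" where
"eval Var x = x" |
"eval (Const i) x = i" |
"eval (Add a b) x = eval a x + eval b x" |
"eval (Mult a b) x = eval a x * eval b x"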
algebra_simps is just a collection of basic simplification rules for groups and rings (such as the integers, in this case). They have nothing to do with this particular example. You can look at the lemmas it contains by typing thm algebra_simps.
For this particular proof, you don't actually need algebra_simps, because commutativity of integer multiplication is already a default simplifier rule anyway.
So, to show how to use algebra_simps, consider an example where you actually do need them: right distributivity of multiplication:
lemma Mult_distrib_right: "eval (Mult (Add a b) c) x = eval (Add (Mult a c) (Mult b c)) x"
If you just try apply simp on this, you will get stuck with the goal
(eval a x + eval b x) * eval c x =
eval a x * eval c x + eval b x * eval c x
Luckily, algebra_simps(4) says just that: thm algebra_simps(4) will show you that this rule is (?a + ?b) * ?c = ?a * ?c + ?b * ?c. Isabelle's simplifier will apply it automatically if you tell it to use the algebra_simps rules, by doing:
apply (simp add: algebra_simps)
instead of
apply simp
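Putting it together, the distributivity lemma then goes through (assuming the eval above):
lemma Mult_distrib_right: "eval (Mult (Add a b) c) x = eval (Add (Mult a c) (Mult b c)) x"
by (simp add: algebra_simps)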

Isabelle: adjusting lemma to form required for `rule` method

I define an inductive relation called step_g. Here is one of the inference rules:
G_No_Op:
"∀j ∈ the (T i). ¬ (eval_bool p (the (γ ⇩t⇩s j)))
⟹ step_g a i T (γ, (Barrier, p)) (Some γ)"
I want to invoke this rule in a proof, so I type
apply (rule step_g.G_No_Op)
but the rule cannot be applied, because its conclusion must be of a particular form already (the two γ's must match). So I adapt the rule like so:
lemma G_No_Op_helper:
"⟦ ∀j ∈ the (T i). ¬ (eval_bool p (the (γ ⇩t⇩s j))) ; γ = γ' ⟧
⟹ step_g a i T (γ, (Barrier, p)) (Some γ')"
by (simp add: step_g.G_No_Op)
Now, when I invoke rule G_No_Op_helper, the requirement that "the two γ's must match" becomes a subgoal to be proven.
The transformation of G_No_Op into G_No_Op_helper looks rather mechanical. My question is: is there a way to make Isabelle do this automatically?
Edit. I came up with a "minimal working example". In the following, lemma A is equivalent to A2, but rule A doesn't help to prove the theorem, only rule A2 works.
consts foo :: "nat ⇒ nat ⇒ nat ⇒ bool"
lemma A: "x < y ⟹ foo y x x"
sorry
lemma A2: "⟦ x < y ; x = z ⟧ ⟹ foo y x z"
sorry
theorem "foo y x z"
apply (rule A)
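For comparison, the A2 variant applies and leaves the missing equality as a subgoal (a sketch of what is described above):
theorem "foo y x z"
apply (rule A2)
(* new subgoals: x < y and x = z *)
oops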
To my knowledge, nothing exists to automate these things. One could probably implement this as an attribute, i.e.
thm A[generalised x]
to obtain something like A2. The attribute would replace every occurence of the variable it is given (i.e. x here) but the first in the conclusion of the theorem with a fresh variable x' and add the premise x' = x to the theorem.
This shouldn't be very hard to implement for someone more skilled in Isabelle/ML than me – maybe some of the advanced Isabelle/ML hackers who read this could comment on the idea.
There is a well-known principle of "proof-by-definition", i.e. you write your initial specifications in such a way that the resulting rules are easy to apply. This might occasionally look unexpected to informal readers, but it is normal for formalists.
I had similar problems and wrote a method named fuzzy_rule, which can be used like this:
theorem "foo y x z"
apply (fuzzy_rule A)
subgoal "x < y"
sorry
subgoal "x = z"
sorry
The code is available at https://github.com/peterzeller/isabelle_fuzzy_rule

Partial functions versus under-specified total functions

Suppose I have a set A ⊆ nat. I want to model in Isabelle a function f : A ⇒ Y. I could use either:
- a partial function, i.e. one of type nat ⇒ Y option, or
- a total function, i.e. one of type nat ⇒ Y that is unspecified for inputs not in A.
I wonder which is the 'better' option. I see a couple of factors:
The "partial function" approach is better because it is easier to compare partial functions for equality. That is, if I want to see if f is equal to another function, g : A ⇒ Y, then I just say f = g. To compare under-specified total functions f and g, I would have to say ∀x ∈ A. f x = g x.
The "under-specified total function" approach is better because I don't have to faff with the constructing/deconstructing option types all the time. For instance, if f is an under-specified total function, and x ∈ A, then I can just say f x, but if f is a partial function I would have to say (the ∘ f) x. For another instance, it's trickier to do function composition on partial functions than on total functions.
For a concrete instance relevant to this question, consider the following attempt at formalising simple graphs.
type_synonym node = nat
record 'a graph =
V :: "node set"
E :: "(node × node) set"
label :: "node ⇒ 'a"
A graph comprises a set of nodes, an edge relation between them, and a label for each node. We only care about the label of nodes that are in V. So, should label be a partial function node ⇒ 'a option with dom label = V, or should it just be a total function that is unspecified outside of V?
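For comparison, the partial-function variant of the record would be (the name graph_opt is made up here for illustration):
record 'a graph_opt =
V :: "node set"
E :: "(node × node) set"
label :: "node ⇒ 'a option"
together with the invariant dom label = V.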
It is probably a matter of taste and may also depend on the use you have in mind, so I'll just give you my personal taste, which would be option 2, the total function. The reason is that I think the bounded quantification will be unavoidable in both approaches anyway. With approach 1 you will find that the easiest way to handle the option values is to limit the domain (bounded quantification) that you are reasoning about. As for the graph example, graph theorems always say something like "for all nodes in V". But as I said, it is probably a matter of taste.

Resources