Choose an arbitrary but fixed element

How can an arbitrary but fixed element be selected from a set in Isabelle? The selected element will be used as a random element of the set for further processing, but no other element of the set may be used afterwards.
My first attempt was:
theory Scratch
imports Main Orderings
begin
value "(let el ∈ {3::int, 4, 5} in el)"
end
But this gives a syntax error.
My second attempt was:
theory Scratch
imports Main Orderings
begin
value "(let el = (SOME x . x ∈ {{3::int, 4},
{5::int, 6} ,
{7::int, 8}})
in el)"
end
This gives a result of type int set instead of the expected type int.
Edit 1
A new example:
theory Scratch
imports Main Orderings
begin
fun add :: "int set ⇒ int" where
"add st = (let el = (SOME x . x ∈ st) in el + (10::int))"
value "add {3::int, 4, 5, 6}"
end
The result of the code is:
"(SOME u. 3 = u ∨ 4 = u ∨ 5 = u ∨ 6 = u) + 10"
:: "int"
instead of an integer value. How do I write add so that the result is either 13, 14, 15, or 16? The exact value does not matter; it may just be different each time the function is executed.

The reason you got an int set as the result is that you selected an element from an int set set: in your second attempt you used a nested set instead of a "flat" set.
Apart from your specific question, I would recommend that you look at the folding locale in the Finite_Set theory. It provides a combinator for folding over sets (given that the operator commutes).
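As a small sketch of mine (not part of the original answer): sum is one instance of such a fold, and goals about sums over concrete finite sets can typically be discharged by simp.
(* sum folds a commutative operator over a finite set *)
lemma "sum (λx. x) {3::int, 4, 5} = 12"
  by simp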

You can define
definition "el = (SOME x. x ∈ {(3::int), 4, 5})"
You can then prove e.g.
lemma "el ∈ {3,4,5}"
unfolding el_def by (rule someI_ex) auto
Logically, el is some fixed element of {3, 4, 5} (as we just proved), and it is always the same element – but you don't know which one. You can think of it as ‘When the universe came into existence, it chose a value for SOME x. x ∈ {3,4,5}, either 3, 4, or 5, but it will never tell you which one it is.’
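The same kind of reasoning applies to the add function from Edit 1. You cannot evaluate the SOME to a concrete number, but you can prove that the result lies in the expected set. The following is a sketch of mine (not from the original answer), assuming the add function from Edit 1 is in scope:
lemma "add {3, 4, 5, 6} ∈ {13, 14, 15, 16}"
proof -
  (* the chosen element is a member of the set, by someI_ex *)
  have "(SOME x. x ∈ {3::int, 4, 5, 6}) ∈ {3, 4, 5, 6}"
    by (rule someI_ex) auto
  then show ?thesis
    by (auto simp: Let_def)
qed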
I don't know what it is exactly that you are trying to do, but I do not think that this is what you really want to do. Perhaps you can go into a bit more detail as to what you want to do with this element?

Related

Isabelle `subst` but replace right side with left side

Suppose the goal is P l. Then I can use apply(subst X), where X is of the form l=r, and as a result I obtain P r. Now my question is whether there exists some other tactic like subst that could use X to change P r into P l.
Here is an example
theorem mul_1_I : "(x::nat) = 1 * x" by (rule sym, rule Nat.nat_mult_1)
theorem "(λ x::nat . x) ≤ (λ x::nat . 2*x)"
using [[simp_trace]]
apply(rule le_funI)
apply(subst mul_1_I)
apply(rule mult_le_mono1)
apply(simp)
done
where
lemma nat_mult_1: "1 * n = n"
Right now I first have to prove the auxiliary lemma mul_1_I, which applies sym to nat_mult_1, and only then can I use subst. It would be ideal if I didn't have to create a new lemma specifically for this.
You can use the symmetric attribute to derive the swapped fact. For example, if x is of the form l = r, then x [symmetric] is the fact r = l (which is also valid due to the symmetry of =). Therefore, in your particular case you can use subst nat_mult_1 [symmetric] directly and avoid creating your auxiliary lemma.
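Concretely, the proof from the question should then look roughly like this (a sketch along the lines of the answer, not tested against a particular Isabelle version):
theorem "(λ x::nat . x) ≤ (λ x::nat . 2*x)"
  apply (rule le_funI)
  apply (subst nat_mult_1 [symmetric])  (* rewrites x to 1 * x, no auxiliary lemma needed *)
  apply (rule mult_le_mono1)
  apply simp
  done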

Conditional rewrite rules with unknowns in condition

In my theory I have some bigger definitions, from which I derive some simple properties using lemmas.
My problem is that the lemmas for deriving the properties are not used by the simplifier, and I have to instantiate them manually. Is there a way to make this more automatic?
A minimal example is shown below:
definition complexFact :: "int ⇒ int ⇒ int ⇒ bool" where
"complexFact x y z ≡ x = y + z"
lemma useComplexFact: "complexFact x y z ⟹ x = y + z"
by (simp add: complexFact_def)
lemma example_1:
assumes cf: "complexFact a b c"
shows "a = b + c"
apply (simp add: cf useComplexFact) (* easy, works *)
done
lemma example_2a:
assumes cf: "complexFact a b c"
shows "a - b = c"
apply (simp add: cf useComplexFact) (* does not work *)
oops
lemma example_2b:
assumes cf: "complexFact a b c"
shows "a - b = c"
apply (simp add: useComplexFact[OF cf]) (* works *)
done
lemma example_2c:
assumes cf: "complexFact a b c"
shows "a - b = c"
apply (subst useComplexFact) (* manually it also works*)
apply (subst cf)
apply simp+
done
I found the following paragraph in the reference manual, so I guess I could solve my problem with a custom solver.
However, I never really touched the internal ML part of Isabelle and don't know where to start.
Rewriting does not instantiate unknowns. For example, rewriting alone cannot prove a ∈ ?A since this requires instantiating ?A. The solver, however, is an arbitrary tactic and may instantiate unknowns as it pleases. This is the only way the Simplifier can handle a conditional rewrite rule whose condition contains extra variables.
The Isabelle simplifier by itself never instantiates unknowns in the assumptions of conditional rewrite rules. However, the solvers can do that, and the most reliable one is assumption. So if complexFact a b c literally appears in the assumptions of the goal (rather than being added to the simpset with simp add: or with [simp]), the assumption solver kicks in and instantiates the unknowns. However, it will only use the first instance of complexFact in the assumptions; if there are several of them, it will not try all of them. In summary, it is better to write
lemma
assumes cf: "complexFact a b c"
shows "a = b + c"
using cf
apply(simp add: useComplexFact)
The second problem with your example is that a = b + c with a, b, and c being free is not a good rewrite rule, because the head symbol on the left-hand side is not a constant but the free variable a. Therefore, the simplifier will not use the equation a = b + c to replace a with b + c, but to replace literal occurrences of the equation a = b + c with True. You can see this preprocessing in the trace of the simplifier (enable it locally with using [[simp_trace]]). That is the reason why example_1 works and the others don't. If you can change your left-hand side such that there is a constant as the head symbol, then some decent proof automation should be possible without writing a custom solver.
Further, you can do some (limited) form of forward reasoning by using useComplexFact as a destruction rule. That is,
using assms
apply(auto dest!: useComplexFact)
can also work in some cases. However, this is pretty close to unfolding the definition in terms of scalability.
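For instance, the failing example_2a from the question can be closed this way (a sketch applying the answer's suggestion, not a verbatim part of it):
lemma
  assumes cf: "complexFact a b c"
  shows "a - b = c"
  using assms by (auto dest!: useComplexFact)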

Instantiating variables ending in a digit using where-attribute (Isabelle)

In Isabelle, given a theorem thm with a free variable x (more precisely, a schematic variable), one can instantiate x using the where-attribute.
E.g., thm[where x=5]
I am unable to make this work if the variable name ends in a number, e.g., thm[where x1=5]. This seems to be due to the fact that the variable is represented in the theorem as "?x1.0" and not as "?x1".
The theory below gives an example.
My question is: How do I instantiate x1 in such a theorem? (E.g., the theorem in the theory below.)
"Solutions" that I am aware of:
- Using thm[of 1] instead of thm[where x1=1]. That works in some cases, but for theorems with many variables this becomes very unwieldy and unstable (the order of variables may change).
- Using only variable names not ending in digits. That would work, but sometimes variables like x1 are very natural in the given context.
theory Tmp imports Main begin
lemma l1: "x+y=y+(x::nat)" by simp
thm l1[where x=1]
(* Prints: 1 + ?y = ?y + 1 *)
lemma l2: "x1+x2=x1+(x2::nat)" by simp
thm l2[where x1=1]
(* Prints: No such variable in theorem: "?x1" *)
thm l2
(* Prints: ?x1.0 + ?x2.0 = ?x1.0 + ?x2.0 *)
You must use the full name of the schematic variable including the question mark:
thm l2[where ?x1.0 = 1]
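With the example theory above, this should print the instantiated fact:
(* Prints: 1 + ?x2.0 = 1 + ?x2.0 *)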

Calculating transitive closures

I have the following definition:
definition someRel :: "nat rel"
where
"someRel = {(1, 2), (2, 3), (3, 4), (4, 5)}"
I want to prove the following lemma:
lemma "someRel^*``{1}={1, 2, 3, 4, 5}"
I have devised the following proof:
proof
show "someRel^*``{1} ⊆ {1, 2, 3, 4, 5}"
proof
fix x
assume "x ∈ someRel⇧* `` {1}"
then show "x ∈ {1, 2, 3, 4, 5}"
using assms someRel_def by (auto elim: rtranclE)
qed
next
show "{1, 2, 3, 4, 5} ⊆ someRel^*``{1}"
proof
fix x
assume "x ∈ {1::nat, 2, 3, 4, 5}"
then show "x ∈ someRel⇧* `` {1}"
using assms someRel_def Image_singleton by (induction) blast+
qed
qed
This proof has the following issues:
- The first part (show "someRel^*``{1} ⊆ {1, 2, 3, 4, 5}") is proved using the rule rtranclE. This does not work if I add one more pair to the someRel relation (say the pair (6, 7)).
- The proof of the second part (show "{1, 2, 3, 4, 5} ⊆ someRel^*``{1}") does not terminate.
Can anyone suggest a better proof? One that (a) allows for more pairs in the someRel relation and (b) terminates.
It turns out that for your specific instance (and some slightly bigger ones I tried), the following suffices (found by first applying auto and then running sledgehammer on the remaining goals to identify useful facts, like converse_rtrancl_into_rtrancl here):
by (auto simp: someRel_def converse_rtrancl_into_rtrancl elim: rtranclE)
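Spelled out on the goal from the question, the answer's one-liner becomes:
lemma "someRel^*``{1}={1, 2, 3, 4, 5}"
  by (auto simp: someRel_def converse_rtrancl_into_rtrancl elim: rtranclE)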
However, in general it might be a better idea to do one of the following:
- devise a tactic to prove such goals (by actually computing the involved transitive closure), or
- compute the transitive closure inside Isabelle/HOL (either via simp -- which might be slow -- or via eval -- which, as far as I know, is kind of an oracle).
For the latter, the AFP entry Executable Transitive Closures might be of interest.
Update: I added an example of a simproc that computes images of finite transitive closures over finite sets by evaluation to the development version of the AFP. Instead of Executable Transitive Closures however, I based the example on
Executable Transitive Closures of Finite Relations. Your example can be found at the end of theory
Finite_Transitive_Closure_Simprocs (as soon as the AFP website is synchronized with the underlying mercurial repository).
Update: Note that the above mentioned simproc is specifically aimed at patterns of the form r^* `` x where the sets r and x are finite in the sense that they are given in finite set notation {x1, x2, ..., xN}. Thus, in order to fire on a specific goal you might have to add additional facts / simp rules / simprocs / ... in order to normalize an expression into this form.
Example: If you had the goal
"(converse someRel)^* `` {1} = {1}"
you would have to add rules that actually "apply" the converse operation on the given finite set. The following would do:
lemma [simp]:
"converse (insert (x, y) A) = insert (y, x) (converse A)"
by auto
Now the goal could be solved via
by (auto simp: someRel_def)
Adding to Chris' answer, here is a full version that uses the AFP entry for transitive closures and uses code_simp instead of eval. code_simp is a bit slower than eval, but does not rely on oracles.
theory Test
imports "$AFP/Transitive-Closure/Transitive_Closure_List_Impl"
begin
lemma to_memo_list: "(set xs)^* `` {a} = set (memo_list_rtrancl xs a)"
unfolding memo_list_rtrancl Image_def by auto
definition someRel :: "nat rel"
where
"someRel = {(1, 2), (2, 3), (3, 4), (4, 5), (5,3)}"
definition someRel_list :: "(nat × nat)list"
where
"someRel_list = [(1, 2), (2, 3), (3, 4), (4, 5), (5,3)]"
lemma someRel_list: "someRel = set someRel_list" by code_simp
lemma "someRel^*``{4}={3, 4, 5}"
unfolding someRel_list to_memo_list by code_simp
end

Partial functions versus under-specified total functions

Suppose I have a set A ⊆ nat. I want to model in Isabelle a function f : A ⇒ Y. I could use either:
a partial function, i.e. one of type nat ⇒ Y option, or
a total function, i.e. one of type nat ⇒ Y that is unspecified for inputs not in A.
I wonder which is the 'better' option. I see a couple of factors:
The "partial function" approach is better because it is easier to compare partial functions for equality. That is, if I want to see if f is equal to another function, g : A ⇒ Y, then I just say f = g. To compare under-specified total functions f and g, I would have to say ∀x ∈ A. f x = g x.
The "under-specified total function" approach is better because I don't have to faff with the constructing/deconstructing option types all the time. For instance, if f is an under-specified total function, and x ∈ A, then I can just say f x, but if f is a partial function I would have to say (the ∘ f) x. For another instance, it's trickier to do function composition on partial functions than on total functions.
For a concrete instance relevant to this question, consider the following attempt at formalising simple graphs.
type_synonym node = nat
record 'a graph =
V :: "node set"
E :: "(node × node) set"
label :: "node ⇒ 'a"
A graph comprises a set of nodes, an edge relation between them, and a label for each node. We only care about the label of nodes that are in V. So, should label be a partial function node ⇒ 'a option with dom label = V, or should it just be a total function that is unspecified outside of V?
It is probably a matter of taste and may also depend on the use you have in mind, so I'll just give you my personal taste, which would be option 2, the total function. The reason is that I think the bounded quantification in both approaches will be unavoidable anyway. I think that with approach 1 you will find that the easiest way to handle the option type is to limit the domain (bounded quantification) that you are reasoning about. As for the graph example, graph theorems always say something like "for all nodes in V". But as I said, it is probably a matter of taste.
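To illustrate that point with the graph record from the question (the property below is a hypothetical example of mine, not from the thread): typical statements about labels are bounded by V anyway, so the plain total function stays convenient.
(* hypothetical property: all nodes in V carry distinct labels;
   values of label outside V are simply never mentioned *)
definition injective_labelling :: "'a graph ⇒ bool" where
  "injective_labelling g ⟷ (∀u ∈ V g. ∀v ∈ V g. label g u = label g v ⟶ u = v)"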
