In the example below, I want to use simp to prove that some terms from
simply typed lambda calculus typecheck.
I add each typechecking rule as a rewriting rule for simp, so simp performs
conditional rewrites and creates schematic variables along the way.
However, while rewriting the side conditions for some rewrites, simp gets
stuck on rewriting terms involving schematic variables, because it does not
instantiate them:
theory Stlc imports Main
begin
type_synonym var = string
datatype exp =
Var var
| Const nat
| Plus exp exp
| Abs var exp
| App exp exp
datatype type =
Nat |
Fun type type
type_synonym ('k, 'v) fmap = "'k ⇒ 'v option"
definition lookup :: "('k, 'v) fmap ⇒ 'k ⇒ 'v option" where
"lookup m x = m x"
definition add :: "('k, 'v) fmap ⇒ 'k ⇒ 'v ⇒ ('k, 'v) fmap" where
"add m x a = (λy. if y = x then Some a else m y)"
definition empty :: "('k, 'v) fmap" where
"empty = (λy. None)"
notation
lookup (infix "$?" 60) and
add ("_ $+ '( _ ', _ ')") and
empty ("$0")
inductive hasty :: "(var, type) fmap ⇒ exp ⇒ type ⇒ bool" where
HtVar:
"G $? x = Some t
⟹ hasty G (Var x) t" |
HtConst:
"hasty G (Const n) Nat" |
HtPlus:
"⟦ hasty G e1 Nat;
hasty G e2 Nat ⟧
⟹ hasty G (Plus e1 e2) Nat" |
HtAbs:
"hasty (G $+ (x, t1)) e1 t2
⟹ hasty G (Abs x e1) (Fun t1 t2)" |
HtApp:
"⟦ hasty G e1 (Fun t1 t2);
hasty G e2 t1 ⟧
⟹ hasty G (App e1 e2) t2"
named_theorems my_simps "simplification rules for typechecking"
declare HtVar [my_simps]
declare HtConst [my_simps]
declare HtPlus [my_simps]
declare HtAbs [my_simps]
declare HtApp [my_simps]
declare lookup_def [my_simps]
declare add_def [my_simps]
lemma "hasty $0 (Plus (Const 1) (Const 1)) Nat"
using [[simp_trace_new mode=full]]
apply(simp add: my_simps)
done
lemma "hasty $0 (Abs ''n'' (Abs ''m'' (Plus (Var ''n'') (Var ''m''))))
(Fun Nat (Fun Nat Nat))"
apply (simp add: my_simps)
done
lemma "⟦P ∧ Q ⟧ ⟹ Q"
apply (rule conjE)
apply(simp) (* note: this simp step does instantiate schematic variables *)
apply assumption
done
(* but here, it seems that simp does not instantiate schematic variables: *)
lemma eleven: "hasty $0 (App (App
(Abs ''n'' (Abs ''m'' (Plus (Var ''n'') (Var ''m''))))
(Const 7)) (Const 4)) Nat"
using [[simp_trace_new mode=full]]
apply (simp add: my_simps) (* seems to fail on unifying "?t1.3 = type.Nat" *)
The relevant part of the simplifier trace (I guess) is the following:
Apply rewrite rule?
Instance of Option.option.inject: Some ?t1.3 = Some type.Nat ≡ ?t1.3 = type.Nat
Trying to rewrite: Some ?t1.3 = Some type.Nat
Successfully rewrote
Some ?t1.3 = Some type.Nat ≡ ?t1.3 = type.Nat
Step failed
In an instance of Stlc.hasty.HtVar:
(λy. if y = ''m'' then Some ?t1.1 else if y = ''n'' then Some ?t1.3 else $0 y) $? ''n'' = Some type.Nat ⟹
hasty (λy. if y = ''m'' then Some ?t1.1 else if y = ''n'' then Some ?t1.3 else $0 y) (Var ''n'') type.Nat ≡ True
Was trying to rewrite:
hasty (λy. if y = ''m'' then Some ?t1.1 else if y = ''n'' then Some ?t1.3 else $0 y) (Var ''n'') type.Nat
Just before the failing step, rewriting stops at ?t1.3 = type.Nat.
However, I would like ?t1.3 = type.Nat to be rewritten to True, and ?t1.3 to be instantiated to type.Nat along the way.
How can I achieve this?
Isabelle's simplifier on its own never instantiates any schematic variables in the goal. This is only done by the so-called solvers. For example, the solver HOL unsafe tries, among others, the tactics rule refl and assumption. This is why the example with ⟦P ∧ Q⟧ ⟹ Q works with simp.
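As a minimal illustration of the solver's role (my own untested sketch, not part of the original answer), a schematic variable that occurs directly in the goal can be instantiated by the refl solver:
schematic_goal "?x = (3 :: nat)"
  by simp (* rewriting leaves the goal unchanged; the solver's rule refl instantiates ?x to 3 *)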
For solving the assumptions of conditional rewrite rules like HtVar, the subgoaler also plays a role. The subgoaler determines how the conditions should be solved. By default in HOL, this is asm_simp_tac, i.e., the equivalent of the method simp (no_asm_simp). This subgoaler cannot handle the instantiation of schematics in the assumption. You can see this by enabling the other simplifier trace:
using [[simp_trace]] supply [[simp_trace_depth_limit=10]]
apply (simp add: my_simps)
gives the following trace message:
[6]Proved wrong theorem (bad subgoaler?)
hasty (λy. if y = ''m'' then Some ?t1.1 else if y = ''n'' then Some type.Nat else $0 y) (Var ''n'') type.Nat ≡ True
Should have proved:
hasty (λy. if y = ''m'' then Some ?t1.1 else if y = ''n'' then Some ?t1.3 else $0 y) (Var ''n'') type.Nat
So if you want to use the simplifier for this kind of type checking, you need a different subgoaler. I'm not enough of an expert to help you with that. You can find more documentation in the Isabelle/Isar reference manual, section 9.3.6.
Instead, I recommend that you write your own type inference method (e.g., using Eisbach) that applies the type inference rules and calls the simplifier as needed. This avoids the problems with the subgoaler.
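For instance, here is a hedged Eisbach sketch of such a method (my own and untested; it assumes the theory additionally imports "HOL-Eisbach.Eisbach"). Applying the typing rules with rule turns their premises into ordinary subgoals rather than side conditions of conditional rewrites, so the simplifier only has to finish the lookup equations, where its solvers may instantiate the schematics:
method typecheck =
  (rule HtVar HtConst HtPlus HtAbs HtApp
   | simp add: lookup_def add_def empty_def)+

lemma "hasty $0 (App (App
  (Abs ''n'' (Abs ''m'' (Plus (Var ''n'') (Var ''m''))))
  (Const 7)) (Const 4)) Nat"
  by typecheck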
It seems that for this kind of proof goal, which can be solved by repeatedly applying inference rules, one should use the classical reasoner (auto) rather than the rewriter (simp).
If I declare all typing rules as safe intro rules:
declare HtVar [intro!]
declare HtConst [intro!]
declare HtPlus [intro!]
declare HtAbs [intro!]
declare HtApp [intro!]
Then most of my lemma is proved by auto, which leaves open two lookup goals that can be solved by simp:
lemma eleven: "hasty $0 (App (App
(Abs ''n'' (Abs ''m'' (Plus (Var ''n'') (Var ''m''))))
(Const 7)) (Const 4)) Nat"
apply(auto)
apply(simp_all add: my_simps)
done
Also, given a classical reasoner such as auto, one can easily specify what rewriter to use on the remaining subgoals as follows, so the above proof can be condensed into one line:
apply(auto simp add: my_simps)
Whereas, given a rewriter such as simp, it seems to be a bit more involved to specify what method to use on the remaining subgoals.
I have a datatype stack_op which consists of several (~20) cases. I'm trying to write a function which skips some of those cases in a list:
function (sequential) skip_expr :: "stack_op list ⇒ stack_op list" where
"skip_expr [] = []"
| "skip_expr ((stack_op.Unary _)#other) = (skip_expr other)"
| "skip_expr ((stack_op.Binary _)#other) = skip_expr (skip_expr other)"
| "skip_expr ((stack_op.Value _)#other) = other"
| "skip_expr other = other"
by pat_completeness auto
termination by lexicographic_order
which seems to always terminate. But trying lexicographic_order generates these unresolved cases:
Calls:
c) stack_op.Binary uv_ # other ~> skip_expr other
Measures:
1) size_list size
2) length
Result matrix:
1 2
c: ? ?
(size_change also doesn't work)
I've read https://isabelle.in.tum.de/dist/Isabelle2021/doc/functions.pdf, but it didn't help. (Maybe there are more complex examples of termination proofs somewhere?)
I tried to rewrite function adding another param:
function (sequential) skip_expr :: "stack_op list ⇒ nat ⇒ stack_op list" where
"skip_expr l 0 = l"
| "skip_expr [] _ = []"
| "skip_expr ((stack_op.Unary _)#other) depth = (skip_expr other (depth - 1))"
| "skip_expr ((stack_op.Binary _)#other) depth =
(let buff1 = (skip_expr other (depth - 1))
in (skip_expr buff1 (length buff1)))"
| "skip_expr ((stack_op.Value _)#other) _ = other"
| "skip_expr other _ = other"
by pat_completeness auto
termination by (relation "measure (λ(_,dep). dep)") auto
which generates an unresolved subgoal:
1. ⋀other v. skip_expr_dom (other, v) ⟹ length (skip_expr other v) < Suc v
which I also don't know how to prove.
Could anyone explain how such cases are solved (as far as I understand, the problem is the two-level recursive call on the right-hand side of the stack_op.Binary case)? Or maybe there is another way to implement such a skip?
Thanks in advance
The lexicographic_order method simply tries to solve the arising goals with the simplifier, so if the simplifier gets stuck you end up with unresolved termination subgoals.
In this case, as you identified correctly, the problem is that you have a nested recursive call skip_expr (skip_expr other). This is always problematic because at this stage, the simplifier knows nothing about what skip_expr does to the input list. For all we know, it might just return the list unmodified, or even a longer list, and then it surely would not terminate.
Confronting the issue head on
The solution is to show something about length (skip_expr …) and make that information available to the simplifier. Because we have not yet shown termination of the function, we have to use the skip_expr.psimps rules and the partial induction rule skip_expr.pinduct, i.e. every statement we make about skip_expr xs always has as a precondition that skip_expr actually terminates on the input xs. For this, there is the predicate skip_expr_dom.
Putting it all together, it looks like this:
lemma length_skip_expr [termination_simp]:
"skip_expr_dom xs ⟹ length (skip_expr xs) ≤ length xs"
by (induction xs rule: skip_expr.pinduct) (auto simp: skip_expr.psimps)
termination skip_expr by lexicographic_order
Circumventing the issue
Sometimes it can also be easier to circumvent the issue entirely. In your case, you could e.g. define a more general function skip_exprs that skips not just one instruction but n instructions. This you can define without nested recursion:
fun skip_exprs :: "nat ⇒ stack_op list ⇒ stack_op list" where
"skip_exprs 0 xs = xs"
| "skip_exprs (Suc n) [] = []"
| "skip_exprs (Suc n) (Unary _ # other) = skip_exprs (Suc n) other"
| "skip_exprs (Suc n) (Binary _ # other) = skip_exprs (Suc (Suc n)) other"
| "skip_exprs (Suc n) (Value _ # other) = skip_exprs n other"
| "skip_exprs (Suc n) xs = xs"
Equivalence to your skip_expr is then straightforward to prove:
lemma skip_exprs_conv_skip_expr: "skip_exprs n xs = (skip_expr ^^ n) xs"
proof -
have [simp]: "(skip_expr ^^ n) [] = []" for n
by (induction n) auto
have [simp]: "(skip_expr ^^ n) (Other # xs) = Other # xs" for xs n
by (induction n) auto
show ?thesis
by (induction n xs rule: skip_exprs.induct)
(auto simp del: funpow.simps simp: funpow_Suc_right)
qed
lemma skip_expr_Suc_0 [simp]: "skip_exprs (Suc 0) xs = skip_expr xs"
by (simp add: skip_exprs_conv_skip_expr)
In your case, I don't think it actually makes sense to do this because figuring out the termination is fairly easy, but it may be good to keep in mind.
Isabelle has some automation for quotient reasoning through the quotient package. I would like to see if that automation is of any use for my example. The relevant definitions is:
definition e_proj where "e_proj = e'_aff_bit // gluing"
So I try to write:
typedef e_aff_t = e'_aff_bit
quotient_type e_proj_t = "e'_aff_bit" / "gluing"
However, I get the error:
Extra type variables in representing set: "'a"
The error(s) above occurred in typedef "e_aff_t"
Because, as Manuel Eberl explains here, we cannot have type definitions that depend on type parameters. In the past, it was suggested that I use the Types-To-Sets approach.
How would that approach work in my example? Would it lead to more automation?
In the past, it was suggested that I use the Types-To-Sets approach ...
The suggestion that was made in my previous answer was to use the standard set-based infrastructure for reasoning about quotients. I only mentioned that there exist other options for completeness.
I still believe that it is best not to use Types-To-Sets, provided that the definition of a quotient type is the only reason why you wish to use Types-To-Sets:
Even with Types-To-Sets, you will only be able to mimic the behavior of a quotient type in a local context with certain additional assumptions. Upon leaving the local context, the theorems that use locally defined quotient types would need to be converted to the set-based theorems that would inevitably rely on the standard set-based infrastructure for reasoning about quotients.
One would need to develop additional Isabelle/ML infrastructure before Local Typedef Rule can be used to define quotient types locally conveniently. It should not be too difficult to develop an infrastructure that is useable, but it would take some time to develop something that is universally applicable. Personally, I do not consider this application to be sufficiently important to invest my time in it.
In my view, it is only viable to use Types-To-Sets for the definition of quotient types locally if you are already using Types-To-Sets for its intended purpose in a given development. Then, the possibility of using the framework for the definition of quotient types locally can be seen as a 'value-added benefit'.
For completeness, I provide an example that I developed for an answer on the mailing list some time ago. Of course, this is merely the demonstration of the concept, not a solution that can be used for work that is meant to be published in some form. To make this useable, one would need to convert this development to an Isabelle/ML command that would take care of all the details automatically.
theory Scratch
imports Main
"HOL-Types_To_Sets.Prerequisites"
"HOL-Types_To_Sets.Types_To_Sets"
begin
locale local_typedef =
fixes R :: "['a, 'a] ⇒ bool"
assumes is_equivalence: "equivp R"
begin
(*The exposition subsumes some of the content of
HOL/Types_To_Sets/Examples/Prerequisites.thy*)
context
fixes S and s :: "'s itself"
defines S: "S ≡ {x. ∃u. x = {v. R u v}}"
assumes Ex_type_definition_S:
"∃(Rep::'s ⇒ 'a set) (Abs::'a set ⇒ 's). type_definition Rep Abs S"
begin
definition "rep = fst (SOME (Rep::'s ⇒ 'a set, Abs). type_definition Rep Abs S)"
definition "Abs = snd (SOME (Rep::'s ⇒ 'a set, Abs). type_definition Rep Abs S)"
definition "rep' a = (SOME x. a ∈ S ⟶ x ∈ a)"
definition "Abs' x = (SOME a. a ∈ S ∧ a = {v. R x v})"
definition "rep'' = rep' o rep"
definition "Abs'' = Abs o Abs'"
lemma type_definition_S: "type_definition rep Abs S"
unfolding Abs_def rep_def split_beta'
by (rule someI_ex) (use Ex_type_definition_S in auto)
lemma rep_in_S[simp]: "rep x ∈ S"
and rep_inverse[simp]: "Abs (rep x) = x"
and Abs_inverse[simp]: "y ∈ S ⟹ rep (Abs y) = y"
using type_definition_S
unfolding type_definition_def by auto
definition cr_S where "cr_S ≡ λs b. s = rep b"
lemmas Domainp_cr_S = type_definition_Domainp[OF type_definition_S cr_S_def, transfer_domain_rule]
lemmas right_total_cr_S = typedef_right_total[OF type_definition_S cr_S_def, transfer_rule]
  and bi_unique_cr_S = typedef_bi_unique[OF type_definition_S cr_S_def, transfer_rule]
  and left_unique_cr_S = typedef_left_unique[OF type_definition_S cr_S_def, transfer_rule]
  and right_unique_cr_S = typedef_right_unique[OF type_definition_S cr_S_def, transfer_rule]
lemma cr_S_rep[intro, simp]: "cr_S (rep a) a" by (simp add: cr_S_def)
lemma cr_S_Abs[intro, simp]: "a∈S ⟹ cr_S a (Abs a)" by (simp add: cr_S_def)
(* this part was sledgehammered - please do not pay attention to the
(absence of) proof style *)
lemma r1: "∀a. Abs'' (rep'' a) = a"
unfolding Abs''_def rep''_def comp_def
proof-
{
fix s'
note repS = rep_in_S[of s']
then have "∃x. x ∈ rep s'" using S equivp_reflp is_equivalence by force
then have "rep' (rep s') ∈ rep s'"
using repS unfolding rep'_def by (metis verit_sko_ex')
moreover with is_equivalence repS have "rep s' = {v. R (rep' (rep s')) v}"
by (smt CollectD S equivp_def)
ultimately have arr: "Abs' (rep' (rep s')) = rep s'"
unfolding Abs'_def by (smt repS some_sym_eq_trivial verit_sko_ex')
have "Abs (Abs' (rep' (rep s'))) = s'" unfolding arr by (rule rep_inverse)
}
then show "∀a. Abs (Abs' (rep' (rep a))) = a" by auto
qed
lemma r2: "∀a. R (rep'' a) (rep'' a)"
unfolding rep''_def rep'_def
using is_equivalence unfolding equivp_def by blast
lemma r3: "∀r s. R r s = (R r r ∧ R s s ∧ Abs'' r = Abs'' s)"
apply(intro allI)
apply standard
subgoal unfolding Abs''_def Abs'_def
using is_equivalence unfolding equivp_def by auto
subgoal unfolding Abs''_def Abs'_def
using is_equivalence unfolding equivp_def
by (smt Abs''_def Abs'_def CollectD S comp_apply local.Abs_inverse mem_Collect_eq someI_ex)
done
definition cr_Q where "cr_Q = (λx y. R x x ∧ Abs'' x = y)"
lemma quotient_Q: "Quotient R Abs'' rep'' cr_Q"
unfolding Quotient_def
apply(intro conjI)
subgoal by (rule r1)
subgoal by (rule r2)
subgoal by (rule r3)
subgoal by (rule cr_Q_def)
done
(* instantiate the quotient lemmas from the theory Lifting *)
lemmas Q_Quotient_abs_rep = Quotient_abs_rep[OF quotient_Q]
(*...*)
(* prove the statements about the quotient type 's *)
(*...*)
(* transfer the results back to 'a using the capabilities of transfer -
not demonstrated in the example *)
lemma aa: "(a::'a) = (a::'a)"
by auto
end
thm aa[cancel_type_definition]
(* this shows {x. ∃u. x = {v. R u v}} ≠ {} ⟹ ?a = ?a *)
end
Here is a very simple language:
type_synonym vname = "string"
type_synonym bool3 = "bool option"
type_synonym env = "vname ⇒ bool3"
datatype exp = Let vname bool exp | Var vname | And exp exp
primrec val :: "exp ⇒ env ⇒ bool3" where
"val (Let var init body) e = val body (e(var ↦ init))"
| "val (Var var) e = e var"
| "val (And a b) e = (case (val a e, val b e) of
(Some x, Some y) ⇒ Some (x ∧ y) | _ ⇒ None)"
I'm trying to prove that if an expression doesn't have any free variables, then I can declare any new variable at the beginning of the expression. I've tried 3 approaches to prove it.
1) The function defined checks whether the expression's value is well-defined (i.e., all used variables are declared):
primrec defined :: "exp ⇒ env ⇒ bool" where
"defined (Let var init body) e = defined body (e(var ↦ init))"
| "defined (Var var) e = (var : dom e)"
| "defined (And a b) e = (defined a e ∧ defined b e)"
lemma var_intro: "defined exp env ⟹ defined exp (env(x ↦ init))"
apply (induct exp)
apply (simp_all split: if_splits)
2) An alternative approach is to collect all free variables of the expression. If the expression doesn't contain any, then we can add a new variable to the environment:
primrec freeVars :: "exp ⇒ vname set ⇒ vname set" where
"freeVars (Let var init body) e = freeVars body (insert var e)"
| "freeVars (Var var) e = (if var ∈ e then {} else {var})"
| "freeVars (And a b) e = freeVars a e ∪ freeVars b e"
lemma var_intro2: "freeVars exp {} = {} ⟹ freeVars exp {x} = {}"
apply (induct exp)
apply (simp_all split: if_splits)
3) The last approach is to eliminate all bound variables from the environment:
primrec isFree :: "vname ⇒ exp ⇒ bool" where
"isFree x (Let var init body) = (if var = x then False else isFree x body)"
| "isFree x (Var var) = (var = x)"
| "isFree x (And a b) = (isFree x a ∨ isFree x b)"
lemma var_elim: "¬ isFree x exp ⟹ val exp (env(x ↦ init)) = val exp (env)"
apply (induct exp)
apply (simp_all split: if_splits)
I can't prove any of the lemmas. Could you suggest a solution?
Your proofs will probably require you to declare env as arbitrary in the induction (arbitrary: env), or the proofs will not work. With that, you will probably be able to prove the properties you stated, but I think it'll be a bit ugly because both your definitions and your lemma statements are unnecessarily specific, which can make proofs more painful.
In particular, your notion of ‘free variable w.r.t. an environment’ seems a bit unnecessarily complicated to me. I think it's easier to use the following:
primrec freeVars :: "exp ⇒ vname set" where
"freeVars (Let var init body) = freeVars body - {var}"
| "freeVars (Var var) = {var}"
| "freeVars (And a b) = freeVars a ∪ freeVars b"
The statement ‘expression exp is well-defined w.r.t. an environment env’ is then simply freeVars exp ⊆ dom env.
Then it is obvious that any expression that is well-defined w.r.t. some environment is also well-defined with any bigger environment.
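For example, the monotonicity fact behind that last remark is plain set reasoning; a hedged sketch (the lemma name is my own, and the proof is not machine-checked):
lemma well_defined_mono:
  "freeVars exp ⊆ dom env ⟹ freeVars exp ⊆ dom (env(x ↦ init))"
  by auto (* dom (env(x ↦ init)) simplifies to insert x (dom env), a superset of dom env *)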
1) You have to lift the commutative property of element insertion on sets to that of state updates on maps, on which your lemma is based.
lemma defined_dom: "defined exp env ⟹ dom env = dom env' ⟹ defined exp env'"
by (induction exp arbitrary: env env'; auto)
lemma defined_comm: "defined exp (env(x↦a, y↦b)) ⟹ defined exp (env(y↦b, x↦a))"
by (auto elim!: defined_dom)
lemma var_intro: "defined exp env ⟹ defined exp (env(x ↦ init))"
by (induction exp arbitrary: env; simp add: defined_comm)
2) If your lemma is based on sets, you will also need the commutativity lemma (insert_commute), which is already in the library:
lemma var_intro2': "freeVars exp s = {} ⟹ freeVars exp (insert x s) = {}"
by (induction exp arbitrary: s x; force simp: insert_commute)
lemma var_intro2: "freeVars exp {} = {} ⟹ freeVars exp {x} = {}"
using var_intro2' .
3) Similarly:
lemma var_elim: "¬ isFree x exp ⟹ val exp (env(x ↦ init)) = val exp (env)"
by (induction exp arbitrary: env; simp add: fun_upd_twist split: if_splits)
Does the following equality hold in Isabelle:
setprod f (UNIV :: 'n∷finite set) = setprod (λx. x) (f ` (UNIV :: 'n∷finite set))
If yes, how can I prove it?
(* tested with Isabelle2013-2 *)
theory Notepad
imports
Main
"~~/src/HOL/Library/Polynomial"
begin
notepad
begin
{
fix f :: "'n∷finite ⇒ ('a::comm_ring_1 poly)"
have "finite (UNIV :: 'n∷finite set)" by simp
from this have "setprod f (UNIV :: 'n∷finite set) = setprod (λx. x) (f ` (UNIV :: 'n∷finite set))"
sorry (* can this be proven ? *)
The lemma holds only if you add the assumption inj f stating that f is injective. The lemma then follows from the library lemma setprod_reindex_id, which can be found using the command find_theorems setprod image.
setprod_reindex_id [unfolded id_def] gives you a generalized version of the lemma you asked for.
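For the record, a hedged sketch of how the corrected statement could be proved (untested; it assumes that setprod_reindex_id in that Isabelle version reads "inj_on f B ⟹ setprod f B = setprod id (f ` B)"):
notepad
begin
  fix f :: "'n∷finite ⇒ ('a::comm_ring_1 poly)"
  assume "inj f"
  then have "setprod f (UNIV :: 'n∷finite set) = setprod (λx. x) (f ` (UNIV :: 'n∷finite set))"
    by (rule setprod_reindex_id [unfolded id_def])
end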
I have a partially-defined operator (disj_union below) on sets that I would like to lift to a quotient type (natq). Morally, I think this should be ok, because it is always possible to find in the equivalence class some representative for which the operator is defined [*]. However, I cannot complete the proof that the lifted definition preserves the equivalence, because disj_union is only partially defined. In my theory file below, I propose one way I have found to define my disj_union operator, but I don't like it because it features lots of abs and Rep functions, and I think it would be hard to work with (right?).
What is a good way to define this kind of thing using quotients in Isabelle?
theory My_Theory imports
"~~/src/HOL/Library/Quotient_Set"
begin
(* A ∪-operator that is defined only on disjoint operands. *)
definition "X ∩ Y = {} ⟹ disj_union X Y ≡ X ∪ Y"
(* Two sets are equivalent if they have the same cardinality. *)
definition "card_eq X Y ≡ finite X ∧ finite Y ∧ card X = card Y"
(* Quotient sets of naturals by this equivalence. *)
quotient_type natq = "nat set" / partial: card_eq
proof (intro part_equivpI)
show "∃x. card_eq x x" by (metis card_eq_def finite.emptyI)
show "symp card_eq" by (metis card_eq_def symp_def)
show "transp card_eq" by (metis card_eq_def transp_def)
qed
(* I want to lift my disj_union operator to the natq type.
But I cannot complete the proof, because disj_union is
only partially defined. *)
lift_definition natq_add :: "natq ⇒ natq ⇒ natq"
is "disj_union"
oops
(* Here is another attempt to define natq_add. I think it
is correct, but it looks hard to prove things about,
because it uses abstraction and representation functions
explicitly. *)
definition natq_add :: "natq ⇒ natq ⇒ natq"
where "natq_add X Y ≡
let (X',Y') = SOME (X',Y').
X' ∈ Rep_natq X ∧ Y' ∈ Rep_natq Y ∧ X' ∩ Y' = {}
in abs_natq (disj_union X' Y')"
end
[*] This is a little bit like how capture-avoiding substitution is only defined on the condition that bound variables do not clash; a condition that can always be satisfied by renaming to another representative in the alpha-equivalence class.
What about something like this (just an idea):
definition disj_union' :: "nat set ⇒ nat set ⇒ nat set"
where "disj_union' X Y ≡
let (X',Y') = SOME (X',Y').
card_eq X' X ∧ card_eq Y' Y ∧ X' ∩ Y' = {}
in disj_union X' Y'"
lift_definition natq_add :: "natq ⇒ natq ⇒ natq"
is "disj_union'" oops
For the record, here is Ondřej's suggestion (well, a slight amendment thereof, in which only one of the operands is renamed, not both) carried out to completion...
(* A version of disj_union that is always defined. *)
definition disj_union' :: "nat set ⇒ nat set ⇒ nat set"
where "disj_union' X Y ≡
let Y' = SOME Y'.
card_eq Y' Y ∧ X ∩ Y' = {}
in disj_union X Y'"
(* Can always choose a natural that is not in a given finite subset of ℕ. *)
lemma nats_infinite:
fixes A :: "nat set"
assumes "finite A"
shows "∃x. x ∉ A"
proof (rule ccontr, simp)
assume "∀x. x ∈ A"
hence "A = UNIV" by fast
hence "finite UNIV" using assms by fast
thus False by fast
qed
(* Can always choose n naturals that are not in a given finite subset of ℕ. *)
lemma nat_renaming:
fixes x :: "nat set" and n :: nat
assumes "finite x"
shows "∃z'. finite z' ∧ card z' = n ∧ x ∩ z' = {}"
using assms
apply (induct n)
apply (intro exI[of _ "{}"], simp)
apply (clarsimp)
apply (rule_tac x="insert (SOME y. y ∉ x ∪ z') z'" in exI)
apply (intro conjI, simp)
apply (rule someI2_ex, rule nats_infinite, simp, simp)+
done
lift_definition natq_add :: "natq ⇒ natq ⇒ natq"
is "disj_union'"
apply (unfold disj_union'_def card_eq_def)
apply (rule someI2_ex, simp add: nat_renaming)
apply (rule someI2_ex, simp add: nat_renaming)
apply (metis card.union_inter_neutral disj_union_def empty_iff finite_Un)
done