I'm currently trying to get into Eisbach.
How can I achieve that the value of the parameter A is used in the subgoal_tac below, rather than A being interpreted as some free variable name? Is there a general way to do this, or would this need special tailoring of the subgoal_tac tactic?
theory Scratch (* Isabelle2019 *)
imports
Main
"HOL-Eisbach.Eisbach"
begin
method test for A :: nat =
subgoal_tac "A = 5"
lemma "True"
apply (test 1)
(*
proof (prove)
goal (2 subgoals):
1. A = 5 ⟹ True
2. A = 5
*)
(* The A has a yellow background in the output pane*)
oops
end
I don't know why it does not work with subgoal_tac. I think I read somewhere that all methods ending in _tac are kind of deprecated now.
As a workaround, you could use a lemma:
method test for A :: nat =
(rule meta_mp[where P="A = 5"])
lemma "True"
apply (test 1)
(*
goal (2 subgoals):
1. 1 = 5 ⟹ True
2. 1 = 5
*)
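For completeness, the instantiated subgoals can then be discharged in the usual way. A minimal sketch (untested) using the test method from above, with the matching argument 5:
lemma "True"
  apply (test 5)
   apply simp (* discharges 5 = 5 ⟹ True *)
  apply simp  (* discharges 5 = 5 *)
  done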
I'm currently trying to use Isabelle/HOL's reification tactic. I'm unable to use different interpretation functions below quantifiers/lambdas. The MWE below illustrates this. The important part is the definition of the form function, where the ter call occurs below the ∀. When trying to use the reify tactic I get a Cannot find the atoms equation error. I don't get this error for interpretation functions which only call themselves under quantifiers.
I can't really reformulate my problem to avoid this. Does anybody know how to get reify working for such cases?
theory MWE
imports
"HOL-Library.Reflection"
begin
datatype Ter = V nat | P Ter Ter
datatype Form = All0 Ter
fun ter :: "Ter ⇒ nat list ⇒ nat"
where "ter (V n) vs = vs ! n"
| "ter (P t1 t2) vs = ter t1 vs + ter t2 vs"
fun form :: "Form ⇒ nat list ⇒ bool"
where "form (All0 t) vs = (∀ v . ter t (v#vs) = 0)" (* use of different interpretation function below quantifier *)
(*
I would expect this to reify to:
form (All0 (P (V 0) (V 0))) []
instead I get an error :-(
*)
lemma "∀ n :: nat . n + n = 0"
apply (reify ter.simps form.simps)
(* proof (prove)
goal (1 subgoal):
1. ∀n. n + n = n + n
Cannot find the atoms equation *)
oops
(* As a side note: the following example in src/HOL/ex/Reflection_Examples.thy (line 448, Isabelle2022) seems to be broken? For me, the reify invocation
doesn't change the goal at all. It uses quantifiers too, but only calls the same interpretation function under quantifiers and also doesn't throw an error,
so at least for me this seems to be unrelated to my problem.
*)
(*
lemma " ∀x. ∃n. ((Suc n) * length (([(3::int) * x + f t * y - 9 + (- z)] # []) # xs) = length xs) ∧ m < 5*n - length (xs # [2,3,4,x*z + 8 - y]) ⟶ (∃p. ∀q. p ∧ q ⟶ r)"
apply (reify Irifm.simps Irnat_simps Irlist.simps Irint_simps)
oops
*)
end
I'm a mathematician just starting to get used to Isabelle, and something that should be incredibly simple turned out to be frustrating. How do I define a function between two constants? Say, the function f: {1,2,3} → {1,2,4} mapping 1 to 1, 2 to 4 and 3 to 2?
I suppose I managed to define the sets as constants t1 and t2 without incident, but (I guess since they're not datatypes) I can't try something like
definition f ::"t1 => t2" where
"f 1 = 1" |
"f 2 = 4" |
"f 3 = 2"
I believe there must be a fundamental misconception behind this difficulty, so I appreciate any guidance.
There are a number of aspects to your question.
First, to get something working quickly, use the fun keyword instead of definition, like so:
fun test :: "nat ⇒ nat" where
"test (Suc 0) = 1" |
"test (Suc (Suc 0)) = 4" |
"test (Suc (Suc (Suc 0))) = 2" |
"test _ = undefined"
You cannot pattern match on any arguments directly in the head of the definition using the definition keyword, whereas you can with fun. Note also that I have replaced the overloaded numeric literals (1, 2, 3, etc.) with the constructors for the nat datatype (0 and Suc) in the pattern match.
An alternative would be to stick with definition, but push the case analysis of the function's argument inside the body of the definition using a case statement, like so:
definition test2 :: "nat ⇒ nat" where
"test2 x ≡
case x of
(Suc 0) ⇒ 1
| (Suc (Suc 0)) ⇒ 4
| (Suc (Suc (Suc 0))) ⇒ 2
| _ ⇒ undefined"
Note that definitions like test2 are not unfolded by the simplifier by default, and you will need to manually add the theorem test2_def to the simplifier's simpset if you want to expand occurrences of test2 in a proof.
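For example, a small (untested) sketch: the simplifier only unfolds test2 once test2_def is supplied explicitly:
lemma "test2 (Suc (Suc 0)) = 4"
  by (simp add: test2_def) (* plain simp alone would leave test2 folded *)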
You can also define new types (you cannot use sets as types, directly, as you are trying to do) corresponding to your two three-element sets with typedef, but personally I would stick with nat.
EDIT: to use typedef, do something like:
typedef t1 = "{x::nat. x = 1 ∨ x = 2 ∨ x = 3}"
by auto
definition test :: "t1 ⇒ t1" where
"test x ≡
case (Rep_t1 x) of
  Suc 0 ⇒ Abs_t1 1
| Suc (Suc 0) ⇒ Abs_t1 4
| Suc (Suc (Suc 0)) ⇒ Abs_t1 2"
Though I don't really use typedef myself, so this may not be the best way of using it, and others may suggest some other way. What typedef does is carve out a new type from an existing one by identifying a non-empty set of inhabitants for the new type. The proof obligation, here closed by auto, merely demonstrates that the defining set of the new type is indeed non-empty; in this case I am carving a three-element set of naturals out into a new type called t1, so the proof is fairly trivial. Two new constants are created, Abs_t1 and Rep_t1, which allow you to move back and forth between the naturals and the new type. If you put a print_theorems after the typedef command, you will see several new theorems about t1 that Isabelle has automatically generated for you.
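One wrinkle in the sketch above: 4 is not in the defining set of t1, so Abs_t1 4 denotes some unspecified element of t1. If you want the codomain {1, 2, 4} from the original question, a variant along the same lines (again an untested sketch; the names t2 and f are just suggestions) introduces a second type:
typedef t2 = "{x::nat. x = 1 ∨ x = 2 ∨ x = 4}"
by auto
definition f :: "t1 ⇒ t2" where
"f x ≡
case (Rep_t1 x) of
  Suc 0 ⇒ Abs_t2 1
| Suc (Suc 0) ⇒ Abs_t2 4
| Suc (Suc (Suc 0)) ⇒ Abs_t2 2"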
I would like to complete this proof.
How can I easily/elegantly use the values found by nitpick? (What should I write at the ... part?)
Alternatively, how can I use the fact that nitpick found a counterexample to finish the proof?
lemma Nitpick_test: "¬(((a+b) = 5) ∧ ((a-b) = (1::int)))" (is "?P")
proof (rule ccontr)
assume "¬ ?P"
nitpick
(* Nitpicking formula...
Nitpick found a counterexample:
Free variables:
a = 3
b = 2
*)
show "False" by ...
qed
The theorem does not hold as stated, because if a = 3 and b = 2, the statement evaluates to False. For other values of a and b, however, the statement does hold. Thus, as a and b are implicitly universally quantified, you cannot prove the theorem as stated.
If you want to instead prove
theorem "EX a b. a + b = 5 & a - b = (1 :: int)"
you can use rule exI[where x="..."] to provide the witness ... for the existential quantifier, so 3 and 2 in this case.
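A minimal (untested) sketch of the completed proof, plugging in the witnesses 3 and 2 that nitpick reported:
theorem "EX a b. a + b = 5 & a - b = (1 :: int)"
apply (rule exI[where x = 3]) (* witness for a *)
apply (rule exI[where x = 2]) (* witness for b *)
apply simp
done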
I have the following code,
Here O is the character O, not the zero 0.
Module Playground1.
Inductive nat : Type :=
| O : nat
| S : nat → nat.
Definition pred (n : nat) : nat :=
match n with
| O ⇒ O
| S n' ⇒ n'
end.
End Playground1.
Definition minustwo (n : nat) : nat :=
match n with
| O ⇒ O
| S O ⇒ O
| S (S n') ⇒ n'
end.
Check (S (S (S (S O)))).
Eval compute in (minustwo 4).
I just want to know how it evaluates to 2. How is it actually checking against a numeral and subtracting? I am not subtracting anything here, yet it still works. What is the basic idea? When I call minustwo 4, how does Coq know it is a numeral, and how does it return the result? How does the matching work here?
It is quite easy with Coq to follow step by step what is going on. But before we can do that, we need to know what your program looks like to Coq without all the syntactic sugar. To do that, type the following in your program:
Set Printing All.
If you now print minustwo, you will see that
Print minustwo.
> match n return nat with
> | O => O
> | S n0 => match n0 return nat with
> | O => O
> | S n' => n'
> end
> end
your pattern match is actually broken up into two pattern matches.
Now let us see step by step how Coq evaluates minustwo 4. To do so, create the following theorem:
Goal (minustwo 4 = 2).
We don't care that much about the theorem itself; we care more about the fact that it contains the term minustwo 4. We can now simplify the expression step by step (you should run this in an IDE to actually see what is going on).
First, we unfold the definition of minustwo, using a tactic called cbv delta.
cbv delta. (* unfold the definition of minustwo *)
We can now call the function, using the tactic cbv beta.
cbv beta. (* do the function call *)
We can now do the pattern match with
cbv iota; cbv beta. (* pattern match *)
And because Coq broke up the match into two, we get to do it again
cbv iota; cbv beta. (* pattern match *)
And that is why minustwo 4 is 2:
reflexivity.
Qed.
I am not sure, but I think sometimes my proofs would be easier if I had a predecessor function, e.g., in case a variable is known not to be zero.
I don't know a good example, but perhaps here: { fix n have "(n::nat) > 0 ⟹ (∑i<n. f i) = Predecessor n" sorry }
Possibly because it is not a good idea, there is no predecessor function in the library.
Is there a way to simulate a predecessor function or similar?
I have thought of this example:
theorem dummy:
shows "1=1" (* dummy *)
proof-
(* Predecessor function *)
def pred == "λnum::nat. (∑i∈{ i . Suc i = num}. i)"
{fix n :: nat
from pred_def have "n>0 ⟹ Suc (pred n) = n"
apply(induct n)
by simp_all
}
show ?thesis sorry
qed
Your definition is unnecessarily complicated. Why do you not just write
def pred ≡ "λn::nat. n - 1"
Then you can have
have [simp]: "⋀n. n > 0 ⟹ Suc (pred n) = n" by (simp add: pred_def)
In the case of 0, the pred function then simply returns 0 and Suc (pred 0) = 0 obviously doesn't hold. You could also define pred ≡ "λn. THE n'. Suc n' = n". That would return the unique natural number whose successor is n if such a number exists (i.e. if n > 0) and undefined (i.e. some natural number you know nothing about) otherwise. However, I would argue that in this case, it is much easier and sensible to just do pred ≡ λn::nat. n - 1.
I would suspect that in most cases, you can simply forgo the pred function and write n - 1; however, I do know that it is sometimes good to have the - 1 “protected” by a definition. In these cases, I usually def a variable n' as n - 1 and prove Suc n' = n – basically the same thing. In my opinion, seeing as proving this takes only one line, it does not really merit a definition of its own, such as this pred function, but one could make a reasonable case for it, I guess.
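A minimal (untested) sketch of that pattern, written in the same brace-block style as the question (it needs to sit inside a proof or a notepad, see below):
{ fix n :: nat
  assume n: "n > 0"
  def n' ≡ "n - 1"
  from n have "Suc n' = n" by (simp add: n'_def)
}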
Another thing: I've noticed you use lemma "1 = 1" as some kind of dummy environment to do Isar proofs in. I would like to point out the existence of notepad, which exists precisely for that use case and that can be used as follows:
notepad
begin
have "some fact" by something
end