Isabelle: Opposite of "intro impI"

If my goal state is foo ==> bar --> qux, I know that I can use the statement
apply (intro impI)
to yield the goal state foo ==> bar ==> qux. What about the other direction? Which command will send me back to the goal state foo ==> bar --> qux?
The best I have come up with so far is
apply (rule_tac P="bar" in rev_mp, assumption, thin_tac "bar")
but that's rather clunky, and I'd like to learn if there's a nicer way.

Instead of
apply (rule_tac P="bar" in rev_mp, assumption, thin_tac "bar")
you can write
apply (erule_tac P="bar" in rev_mp)
where erule_tac eliminates the matched assumption for you so you don't need the thin_tac anymore.
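On a generic version of the running example (A, B, C standing in for foo, bar, qux), the step looks like this; the lemma is of course not provable as stated, so the sketch ends with oops:

```isabelle
lemma "A ⟹ B ⟹ C"
  apply (erule_tac P = "B" in rev_mp)
  (* goal state is now: A ⟹ B ⟶ C *)
  oops
```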

I am assuming you want to stay in apply-style. Then this is just a puzzle with probably many possible solutions. Here is one:
apply (unfold atomize_imp, rule)
or a bit more explicit
apply (unfold atomize_imp, rule impI)
where the unfold atomize_imp replaces all occurrences of ==> by -->. Then, in general, you can specify the number of --> that should be replaced by ==> (starting from the left), by a corresponding number of rule (or rule impI).
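A minimal sketch on a goal with two meta-implications (A, B, C are placeholder propositions; the lemma is not provable, so it ends in oops):

```isabelle
lemma "A ⟹ B ⟹ C"
  apply (unfold atomize_imp)
  (* goal state: A ⟶ B ⟶ C *)
  apply (rule impI)
  (* goal state: A ⟹ B ⟶ C *)
  oops
```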
Anyway, if you use Isar style, then you just state explicitly what you want to have, and almost any automatic tool will be able to fill in the rest.

How do I get the turnstile to work in Isabelle?

I was wondering how to get the turnstile to work in Isabelle 2017. I'm new to the program and have been able to prove some theorems, but I can't figure out how to get the turnstile symbol to work. Do I have to change the imports, or is there something else I have to do?
Thanks.
Okay, so it seems like you want to define some custom syntax involving the turnstile symbol.
This is described in Sections 8.2 and 8.3 of the Isabelle/Isar reference manual (and some more advanced stuff in Sections 8.5.2 and 8.5.3). You can do a lot of fancy custom syntax with Isabelle: the list syntax [1, 2, 3] for Cons 1 (Cons 2 (Cons 3 Nil)), for example, is defined entirely in ‘user space’, as is the list comprehension syntax [x + y. x ← xs, y ← ys, x ≠ y]. These are pretty complicated.
However, in most cases, you only need a very small fragment of all that power: As outlined in Section 8.2 of isar-ref, you can attach syntax directly to constants as you define them (e.g. with definition, primrec, fun, datatype), either by using something like infixl, infixr, or binder, or by directly providing a mixfix syntax specification. The tricky part here is often getting the precedences right so that they don't clash with other syntax.
If you get this wrong, you will get warnings (not upon defining the syntax, but upon using it) telling you that there are ambiguous parse trees and that you should perhaps try to disambiguate your syntax by providing better mixfix priorities.
For examples of how this looks in practice you can look, well, basically at any Isabelle syntax you already know by going to where it is defined and looking at the mixfix specification.
One place that comes to my mind is the ~~/src/HOL/IMP directory, in particular the files Hoare.thy and Types.thy. They define some custom syntax, some of which even includes the turnstile symbol.
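As a rough sketch, here is how one might attach turnstile syntax to a hypothetical entailment constant via a mixfix annotation. The constant entails, its type, and the precedences [51, 51] 50 are all made up for illustration:

```isabelle
definition entails :: "bool set ⇒ bool ⇒ bool"  ("_ ⊢ _" [51, 51] 50)
  where "Γ ⊢ φ ⟷ ((∀ψ∈Γ. ψ) ⟶ φ)"

lemma "{A, B} ⊢ A"
  by (simp add: entails_def)
```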

How to manage all the various proof methods

Is there a "generic" informal algorithm that users of Isabelle follow when they are trying to prove something that isn't proved immediately by auto or sledgehammer? A kind of general way of figuring out whether auto needs additional, user-formulated lemmas to succeed, or whether some other proof method would be a better fit.
A related question is: is there maybe a table somewhere listing all the proof methods together with the contexts in which to apply them? When I'm reading through the Programming and Proving tutorial, the descriptions of the various methods (and of their variants, such as the many variants of auto) are scattered throughout the text, which constantly makes me go back and forth between the text and my Isabelle code (which also leads to forgetting what exactly is used for what) and results in a very inefficient workflow.
No, there's no "generic" informal way. You can use try0 which tries all standard proof methods (like auto, blast, fastforce, …) and/or sledgehammer which is more advanced.
After that, the fun part starts.
Can this theorem be shown with simpler helper lemmas? You can use the command sorry to assume that a lemma is true.
How would I prove this on a piece of paper? And then try to do this proof in Isabelle.
Ask for help :) Lots of people on stack overflow, #isabelle on freenode and the Isabelle mailing list are waiting for your questions.
For your second question: No, there's no such overview. Maybe someone should write one, but as mentioned before you can simply use try0.
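For illustration, here is a sketch of the sorry workflow (the lemma name helper is made up; rev and @ come from Main, and the helper fact happens to already be a library lemma):

```isabelle
lemma helper: "rev (rev xs) = xs"
  sorry  (* assumed for now; prove it later, or hand it to sledgehammer *)

lemma "rev (rev (xs @ ys)) = xs @ ys"
  by (simp add: helper)
```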
ammbauer's answer already covers lots of important stuff, but here are some more things that may help you:
When the automation gets stuck at a certain point, look at the available premises and the goal at that point. What kind of simplification did you expect the system to do at that point? Why didn't it do it? Perhaps the corresponding rule is just not in the simp set (add it with simp add:) or some preconditions of the rule could not be proved (in that case, add enough facts so that they can be proved, or do it yourself in an additional step)
Isar proofs are good. If you have some complicated goal, try breaking it down into smaller steps in Isar. If you have bigger auxiliary facts that may even be of more general interest, try pulling them out as auxiliary lemmas. Perhaps you can even generalise them a bit. Sometimes that even simplifies the proof.
In the same vein: Too much information can confuse both you and Isabelle. You can introduce local definitions in Isar with define x where "x = …" and unfold them with x_def. This makes your goals smaller and cleaner and decreases the probability of the automation going down useless paths in its proof search.
Isabelle does not automatically unfold definitions, so if you have a definition, and you want to unfold it for a proof, you have to do that yourself by using unfolding foo_def or simp add: foo_def.
The defining equations of functions defined with fun or primrec are unfolded automatically by anything that uses the simplifier (simp, simp_all, force, auto) unless the equations (foo.simps) have been manually deleted from the simp set (with lemmas [simp del] = foo.simps or declare foo.simps [simp del]).
Different proof methods are good at different things, and it takes some experience to know what method to use in what case. As a general rule, anything that requires only rewriting/simplification should be done with simp or simp_all. Anything related to classical reasoning (i.e. first-order logic or sets) calls for blast. If you need both rewriting and classical reasoning, try auto or force. Think of auto as a combination of simp and blast, and force is like an ‘all-or-nothing’ variant of auto that fails if it cannot solve the goal entirely. It also tries a little harder than auto.
Most proof methods can take options. You probably already know add: and del: for simp and simp_all, and the equivalent simp:/simp del: for auto. However, the classical reasoners (auto, blast, force, etc.) also accept intro:, dest:, elim: and the corresponding del: options. These are for declaring introduction, destruction, and elimination rules.
Some more information on the classical reasoner:
An introduction rule is a rule of the form P ⟹ Q ⟹ R that should be used whenever the goal has the form R, to replace it with P and Q
A destruction rule is a rule of the form P ⟹ Q ⟹ R that should be used whenever a fact of the form P is among the premises, to replace the goal G with the new goals Q and R ⟹ G.
An elimination rule is something like thm exE (elimination of the existential quantifier). These are like a generalisation of destruction rules that also allow introducing new variables. Rules like this often show up in case distinctions.
The classical reasoner used by auto, blast, force etc. will use the rules in the claset (i.e. that have been declared intro/dest/elim) automatically whenever appropriate. If doing that does not lead to a proof, the automation will backtrack at some point and try other rules. You can disable backtracking for specific rules by using intro!: instead of intro: (and analogously for the others). Then the automation will apply that rule whenever possible without ever looking back.
The basic proof methods rule, drule, erule correspond to applying a single intro/dest/elim rule and are good for single-step reasoning, e.g. in order to find out why automatic methods fail to make progress at a certain point. intro is like rule, but applies the given set of rules iteratively until this is no longer possible.
safe and clarify are occasionally useful. The former essentially strips away quantifiers and logical connectives (try it on a goal like ∀x. P x ∧ Q x ⟶ R x) and the latter similarly tries to ‘clean up’ the goal. (I forgot what it does exactly, I just use it occasionally when I think it might be useful)
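Trying safe on exactly that suggested goal (P, Q, R are free predicates; the lemma is not provable, so the sketch ends with oops):

```isabelle
lemma "∀x. P x ∧ Q x ⟶ R x"
  apply safe
  (* goal state is now roughly: ⋀x. P x ⟹ Q x ⟹ R x *)
  oops
```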

General way to apply an arbitrary method to all subgoals?

Suppose I have a list of subgoals in an apply style proof. I know that something like
apply blast
will provide a proof for a number of the subgoals within this list. Is there a way I can avoid duplicating this line?
For example, suppose I have three subgoals where the first and the third are provable using the above method while the second is provable with something like
apply (metis lemma1 lemma2 ...)
A naive proof for such subgoals will look like
apply blast
apply (metis lemma1 lemma2 ...)
apply blast
What I am looking for is a way to give a proof without duplicating the apply blast portion of the proof. Observe that using the method combinator + will not achieve this; it merely applies the method repeatedly until the first failure.
Actually apply blast will only try to solve the first subgoal. If you want to solve as many subgoals as possible you could try
apply blast+
I am not sure what exactly you are trying to achieve, but an alternative to your using some_lemma might be
apply (insert some_lemma)
which inserts some_lemma as additional assumption of all of your subgoals.
Update: There are some basic proof method combinators available in Isabelle (see also Section 6.4.1: Proof method expressions, of isar-ref). So you could do for example
apply (blast | metis ...)+
which will first try to solve a subgoal by blast and only if this fails by metis .... However, its usefulness depends on the specific subgoal situation, e.g., if blast takes a long time before failing, it might not be suitable. More fine-grained control of proof methods is available through the recent Isabelle/Eisbach proof method language (see isabelle doc eisbach).
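A minimal Eisbach sketch (this requires importing HOL-Eisbach.Eisbach in the theory header; the method name solve and the fallback to fastforce are my own choices for illustration):

```isabelle
method solve = (blast | fastforce)

lemma "A ∧ B ⟶ B ∧ A" and "(1::nat) + 1 = 2"
  by (solve+)  (* blast gets the first goal, fastforce the arithmetic one *)
```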

Apply simplifier to arbitrary term

I have a term in mind, say "foo 1 2 a b", and I'd like to know if Isabelle can simplify it for me. I'd like to write something like
simplify "foo 1 2 a b"
and have the simplified term printed in the output buffer. Is this possible?
My current 'workaround' is:
lemma "foo 1 2 a b = blah"
apply simp
which works fine but looks a bit hacky.
What doesn't work (in my case) is:
value "foo 1 2 a b"
because a and b are unbound variables, and because my foo involves infinite sets and other fancy stuff, which the code generator chokes on.
There is no built-in feature AFAIK, but there are several ways to achieve this. You have already discovered one of them, namely state the term as a lemma and then invoke the simplifier. The drawback is that this cannot be used in all contexts, for example, not inside an apply proof script.
Alternatively, you can invoke the simplifier via the attribute [simplified]. This works in all contexts via the thm command and produces the output in the output buffer. First, the term must be injected into a theorem; then you can apply [simplified] to that theorem and display the result with thm. Here is the preparatory stuff, which can go into your theory of miscellaneous helpers.
definition simp :: "'a ⇒ bool" where "simp _ = True"
notation (output) simp ("_")
lemma simp: "simp x" by(simp add: simp_def)
Then, you can write
thm simp[of "foo 1 2 a b", simplified]
and see the simplified term in the output window.
The evaluation mechanism is probably not what you want, because evaluation uses a different set of rewrite rules (namely the code equations) than the simplifier normally uses (the simpset). Therefore, this is likely to evaluate to a different term than by applying the simplifier. To see the difference, apply code_simp instead of simp in your approach with lemma "foo 1 2 a b = blah". The proof method code_simp uses the code equations just like value [simp] used to.
When using the value command, the evaluation of the argument is conducted by registered evaluators (see the Reference Manual of Isabelle2013-2).
It used to be possible to explicitly choose an evaluator in previous versions of Isabelle (e.g., Isabelle2013-2) by giving an extra argument to the value command. E.g.,
value [simp] "foo 1 2 a b"
It seems that in Isabelle2014 this parameter was dropped and according to the Reference Manual of Isabelle2014, the strategy is now fixed to first use ML code generation and in case this fails, normalization by evaluation.
From the NEWS file in the development version (e82c72f3b227) of Isabelle it seems as if this parameter will be enabled again in the upcoming Isabelle release.
UPDATE: As Andreas pointed out, value [simp] does not use the same set of simplification rules as apply simp. So even if available, the solution I described above will most likely not yield the result you want.

Mapping over sequence with a constant

If I need to provide a constant value to a function which I am mapping to the items of a sequence, is there a better way than what I'm doing at present:
(map my-function my-sequence (cycle [my-constant-value]))
where my-constant-value is a constant in the sense that it's going to be the same for all the mappings over my-sequence, although it may itself be the result of some function further out. I get the feeling that later I'll look back at what I'm asking here and think it's a silly question, because if I structured my code differently it wouldn't be a problem, but well, there it is!
In your case I would use an anonymous function:
(map #(my-function % my-constant-value) my-sequence)
Using a partially applied function is another option, but it doesn't make much sense in this particular scenario:
(map (partial my-function my-constant-value) my-sequence)
You would (maybe?) need to redefine my-function to take the constant value as the first argument, and since you don't need to accept a variable number of arguments, using partial doesn't buy you anything.
I'd tend to use partial or an anonymous function as dbyrne suggests, but another tool to be aware of is repeat, which returns an infinite sequence of whatever value you want:
(map + (range 4) (repeat 10))
=> (10 11 12 13)
Yet another way that I find sometimes more readable than map is the for list comprehension macro:
(for [x my-sequence]
(my-function x my-constant-value))
Yep :) a little gem from the "other useful functions" section of the API: constantly
(map my-function my-sequence (constantly my-constant-value))
The pattern (map combines-data something-new a-constant) is rather common in idiomatic Clojure. It's also relatively fast, even with chunked sequences and such.
EDIT: this answer is wrong, but constantly and the rest of the "other useful functions" API are so cool that I would like to leave the reference to them here anyway.
