Avoiding assumptions with Sledgehammer - Isabelle

How can I avoid certain assumptions or lemmas when I use Sledgehammer? Is there a way to do that? I am sure that there are other methods or lemmas that can solve my sublemma.

Facts marked with the attribute [no_atp] will not be passed to Sledgehammer. So if you declare your lemmas or assumptions with [no_atp], Sledgehammer will not be able to use them.
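For example, the attribute can be attached when the fact is proved, or added afterwards with declare. A minimal sketch (my_helper and some_existing_lemma are placeholder names; the statement is a stock list fact):

lemma my_helper [no_atp]: "rev (rev xs) = xs"
  by simp

(* or exclude a fact that already exists (some_existing_lemma is hypothetical): *)
declare some_existing_lemma [no_atp]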

Related

How does Sledgehammer translate lambda-abstractions to ATPs?

In Extending Sledgehammer with SMT solvers there is this claim:
Lambda-abstractions are rewritten to Turner combinators or transformed into explicit functions using lambda-lifting.
The linked reference, Translating higher-order clauses to first-order clauses, does not clarify how these methods are combined. Are both of them always used? Is one preferred over the other?
According to mediatum.ub.tum.de/doc/1097834/1097834.pdf, the choice of method is tailored to each prover.
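For experimentation, you can also force a particular translation by hand through Sledgehammer's lam_trans option. A minimal sketch (the goal is a toy example containing a genuine lambda-abstraction; the exact set of accepted option values depends on your Isabelle version):

lemma "map (%x. f (g x)) xs = map f (map g xs)"
  sledgehammer [lam_trans = lifting]   (* force lambda-lifting *)
  sledgehammer [lam_trans = combs]     (* force combinator translation *)
  oops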

Reordering goals (Isabelle)

I would like to know how to reorder goals in the following situation:
lemma "P=Q"
proof (rule iffI, (*here I would like to swap goal order*), rule ccontr)
oops
I would like a solution that doesn't involve changing the lemma statement. I realise that prefer and defer can be used in apply-style proofs, but I would like to have a method that can be used in the proof (...) part.
Edit:
As Andreas Lochbihler says, writing rule iffI[rotated] works in the above example. However, is it possible to swap the goal order in the following situation without changing the statement of the lemma?
lemma "P==>Q" "Q==>P"
proof ((*here I would like to swap goal order*), rule ccontr)
oops
This example may seem contrived, but I feel that there may be situations where it is inconvenient to change the statement of the lemma, or it is necessary to swap goal order when there is no previous application of a rule such as iffI.
The order of the subgoals is determined by the order of the assumptions of the rule you apply. So it suffices to swap the assumptions of the iffI rule, e.g., using the attribute [rotated] as in
proof(rule iffI[rotated], rule ccontr)
In general, there is no proof method that changes the order of the goals. And if you are thinking about using this with more sophisticated proof automation like auto, I'd strongly advise you against doing these kinds of things. Proof scripts with lots of automation in them should work independently of the order of the goals. Otherwise, your proofs will break easily when something in the proof automation setup changes.
However, a few low-level proof tactics allow explicit goal addressing (mostly those whose names end in _tac). For example,
proof(rule iffI, rule_tac [2] ccontr)
applies the ccontr rule to the second subgoal instead of the first.
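For completeness, here is the [rotated] variant as a tiny self-contained proof (the conjunction-commutativity goal is just a placeholder); note that the right-to-left direction now comes first:

lemma "(A & B) = (B & A)"
proof (rule iffI[rotated])
  assume "B & A" then show "A & B" by simp
next
  assume "A & B" then show "B & A" by simp
qed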

Error message in Isabelle/HOL

When applying the wrong tactic or the wrong deduction rule, the error message is usually too general:
Failed to apply initial proof method
I am using Isabelle to teach natural deduction. When Isabelle complains, some students change the rule/tactic arbitrarily without reflecting on the possible causes of the error. A more detailed error message could be part of the learning process of Isabelle, I think.
How can those error messages be made student-friendly? Does that require editing the source code, or can it be managed by defining more expressive natural-deduction tactics?
Tactics in Isabelle can be thought of as chainable non-deterministic transformations of the goal state. That means that the question of what specifically caused a tactic to fail is difficult to answer in general, and there is no mechanism to track such information in Isabelle's tactic system. However, one could relatively easily modify existing tactics such that they can optionally output some tracing information.
However, I have no idea what this information should be. There are simple tactics such as rule where the reason why applying it fails is always that the rule that it is given cannot be unified with the goal (and possibly chained facts), and there are similarly simple tactics like intro, drule, frule, erule, and elim. Such unification-related problems can sometimes be debugged quite well using declare [[unify_trace_failure]], which prints some tracing information every time a unification fails.
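As a small sketch of that option in action, the following deliberately applies a disjunction rule to a conjunction goal, so that rule fails and the unifier prints its trace:

lemma "A & B"
  using [[unify_trace_failure]]
  apply (rule disjI1)  (* fails: ?P | ?Q does not unify with A & B; the trace shows why *)
  oops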
With simp and auto, the situation is much less clear because of how many different things these methods can do. Essentially, when the proof method could not be applied at all, it means that ‘none of the things that simp and auto can do worked for this goal’. For simp, this includes simplification, splitting, linear arithmetic, and probably a lot more things that I forgot. For auto, it additionally includes classical reasoning with a certain search depth. One cannot really say easily what specific thing went wrong when these methods fail.
Some specialised tactics do print more specific error messages if something goes wrong, e.g. sat and smt sometimes print a special error message when they have found a counterexample to the goal, but I cannot even imagine what more helpful output for something like simp or auto would look like. If you have an idea, please do tell me.
I think this problem cannot really be solved with error messages; one must simply get to know the system and the tactics one uses better and understand what they do and when they fail. Perhaps it would be good to have a kind of catalogue of commonly-used tactics that mentions these things.

Complete proof output in Isabelle

If Isabelle did not find a proof for a lemma, is it possible to output everything that was done by all the proof methods that were employed, up to the subgoals at which they could not proceed any further? This would help me see at which avenues they got stuck, which would then help me to point them in the right direction.
(And also for completed proofs I would find it interesting to have a complete proof log that shows all the elementary inferences that were performed to prove a lemma.)
This question sounds similar to this one, which I answered a few days ago. Parts of that answer also apply here. Anyway, to answer this question specifically:
Not really. For most basic proof methods (rule et al., intro, fact, cases, induct), it is relatively straightforward what they do and when they fail: it is pretty much always because the rule they tried to apply does not unify with the goals/premises that they are given (or because they don't know which rule to apply in the first place).
You were probably thinking of the more automatic tactics like blast, force, simp, and auto. Most of them (blast, force, fastforce, fast, metis, meson, best, etc.) are ‘all-or-nothing’: they either solve the subgoal or they do nothing at all. It is therefore a bit tricky to find out where they get stuck, and usually people use auto for this kind of exploration: you apply auto, look at the remaining subgoals, and think about what facts/parameters you could add in order to break those down further.
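That exploration loop might look like the following skeleton (everything here is hypothetical: some_property, helper_lemma, and helper_dest are placeholder names):

lemma "some_property x"
  apply auto
  (* inspect the remaining subgoals, then give auto more to work with, e.g.: *)
  (* apply (auto simp add: helper_lemma dest: helper_dest) *)
  oops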
The situation with simp is similar, except that it does less than auto. simp is the simplifier, which uses term rewriting, custom rewriting procedures called simprocs, certain solvers (e.g. for linear arithmetic), and a few other convenient things like splitters to get rid of if expressions. auto is basically simp combined with classical reasoning, which makes it a lot more powerful than simp, but also less predictable. (and occasionally, auto does too much and thereby turns a provable goal into an unprovable goal)
There are some tracing tools (e.g. the simplifier trace, which is explained here). I thought there was also a way to trace classical reasoning, but I cannot seem to find it anymore; perhaps I was mistaken. In any case, tracing tools can sometimes help to explain unexpected behaviour, but I don't think they are the kind of thing you want to use here. The better approach is to understand what kinds of things these methods try; then, when simp or auto returns a subgoal, you can look at it, determine what you would have expected simp or auto to do next and why it didn't happen (usually because of some missing fact), and fix that.
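For reference, the classic simplifier trace can be enabled locally like this (a minimal sketch; the goal is a stock fact that simp solves outright, so the trace simply lists each rewrite step):

lemma "rev (rev xs) @ ys = xs @ ys"
  using [[simp_trace]]
  apply simp
  done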

Do monads do anything other than increase readability and productivity?

I have been looking at monads a lot over the past few months (functors and applicative functors as well). I have been attempting to figure out when monads are useful in a general sense. If I am looking at a piece of code, I ask: should I employ a specific monad or a stack via transformers? In my efforts I think I have found an answer, but I want others' input in case I have missed something. It appears to me that monads are useful for abstracting away specific plumbing to increase the readability/declarative nature of a piece of code, which can have the side effect of increasing productivity by requiring less code. The only exception I can find is the IO monad, which attempts to keep a pure function pure in the face of IO. It doesn't appear that a given monad provides a solution to a problem that can't be achieved via other means. Am I missing something?
Does any feature beyond mere Turing-completeness provide a solution to a problem that can't be achieved via other means? Nope. All Turing-equivalent languages are different ways of expressing the same basic things. Since monads are built out of more fundamental building blocks, obviously that set of building blocks is able to do anything a monad can. When we talk about a language or feature "allowing" us to do something, we mean it allows us to express that thing naturally, in a way that's easy to understand.