Is there a prover just for propositional logic - Isabelle

I tried to implement LTL syntactically using the axiomatization command, with the purpose of automatically finding proofs for theorems (my motivation is proving program properties).
However, the automatic provers (cvc4, z3, e, etc.) all use quantifiers of some sort. For example, using FOL one could prove F(p) --> G(p), which is obviously false.
My question is whether there exists a prover, just like the ones mentioned, but made for propositional logic, i.e. one that only has access to MP and the propositional axioms.
I am rather new to Isabelle, so there might be an easier way of doing this that I'm not seeing.
EDIT: I am looking for a Hilbert-style deduction prover and not a SAT solver, as that would defeat the purpose of implementing the logic axiomatically.

I think the sat method only uses propositional logic.
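For example, a propositional tautology like the following should be provable with it directly (a tiny illustrative check):

lemma "P ∨ ¬ P"
  by sat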
However, I would recommend not using axiomatizations: just define the syntax of LTL using datatypes and the semantics using functions. Maybe you can reuse the formalization from https://www.isa-afp.org/entries/LTL.html
Without axiomatizations you are then free to use any proof method.
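For illustration, here is a minimal sketch of that approach (the constructor and function names are mine; the AFP entry above provides a complete version):

datatype 'a ltl =
    Prop 'a
  | Neg "'a ltl"
  | And "'a ltl" "'a ltl"
  | Next "'a ltl"
  | Until "'a ltl" "'a ltl"

(* semantics over infinite words, modelled as functions nat ⇒ 'a set *)
fun sem :: "(nat ⇒ 'a set) ⇒ nat ⇒ 'a ltl ⇒ bool" where
  "sem w i (Prop p) ⟷ p ∈ w i"
| "sem w i (Neg φ) ⟷ ¬ sem w i φ"
| "sem w i (And φ ψ) ⟷ sem w i φ ∧ sem w i ψ"
| "sem w i (Next φ) ⟷ sem w (i + 1) φ"
| "sem w i (Until φ ψ) ⟷ (∃j≥i. sem w j ψ ∧ (∀k. i ≤ k ∧ k < j ⟶ sem w k φ))"

With such a definition, statements about LTL formulas are ordinary HOL lemmas, and all the standard proof methods apply.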

What you want is a SAT solver, such as MiniSat.
However, the automatic provers (cvc4, z3, e, etc.) all use quantifiers of some sort. For example, using FOL one could prove F(p) --> G(p), which is obviously false.
This is not correct. Any first-order theorem prover, like iProver, E, or Vampire, will not prove forall X. f(X) => g(X).

Related

What's the distinction between `shows` and `obtains` in Isabelle Isar?

I am trying to understand the difference between the shows and obtains commands in Isar (as of Isabelle 2020). The documentation in isar-ref.pdf (p. 137) seems to have a typo and confuses me.
...
Moreover, there are two kinds of conclusions: shows states several
simultaneous propositions (essentially a big conjunction), while
obtains claims several simultaneous simultaneous contexts of
(essentially a big disjunction of eliminated parameters and
assumptions, cf. §6.6).
shows seems straightforward.
From the limited experience I have so far, it seems that obtains is about proving a conclusion that begins with an existential quantifier, as shown in this question (where the conclusion is existential and the goal is then stated with obtains).
Is this really the distinction between shows and obtains (universal vs existential)?
If not, what is the proper intended use of obtains?
The lemmas "shows ‹∃x. P x›" and "obtains x where ‹P x›` are very similar, but not entirely identical.
In terms of proofs, the obtain version requires to find an explicit witness (look the fact called that in such a proof). Something similar can be achieved by applying the theorem exI after the shows.
The generated lemmas are different. The obtains version generates an elimination rule instead of a quantified, because there is no existential quantifier in Pure. However, the difference rarely matters when using the theorem.
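A small self-contained illustration of the two forms (the lemma names are mine):

lemma shows_version:
  shows "∃x::nat. 0 < x"
  by (rule exI[of _ 1]) simp

lemma obtains_version:
  obtains x :: nat where "0 < x"
  by (rule that[of 1]) simp

(* the generated facts differ:
   shows_version:   ∃x. 0 < x
   obtains_version: (⋀x. 0 < x ⟹ thesis) ⟹ thesis *)

Note how the obtains proof goes through the fact that, i.e. by exhibiting the explicit witness 1.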

"Efficient" least- and greatest fixpoint computations?

I am trying to compute two finite sets of some enumerable type (let's say char) using a least- and a greatest-fixpoint computation, respectively. I want my definitions to be extractable to SML, and to be "semi-efficient" when executed. What are my options?
From exploring the HOL library and playing around with code generation, I have the following observations:
I could use the complete_lattice.lfp and complete_lattice.gfp constants with a pair of additional monotone functions to compute my sets, which in fact I currently am doing. Code generation does work with these constants, but the code produced is horribly inefficient: if I understand the generated SML code correctly, it performs an exhaustive search over every possible set in the powerset of characters. Any use of these two constants at type char, no matter how simple, therefore diverges when executed.
I could try to make use of the iterative fixpoint described by the Kleene fixpoint theorem in directed-complete partial orders. From exploring, there is a ccpo_class.fixp constant in the theory Complete_Partial_Order, but the underlying iterates constant that it is defined in terms of has no associated code equations, so no code can be extracted.
Are there any existing fixpoint combinators hiding somewhere, suitable for use with finite sets, that produce semi-efficient code with code generation that I have missed?
None of the general fixpoint combinators in Isabelle's standard library is meant to be used directly for code extraction, because their construction is too general to be usable in practice. (There is another one in the theory ~~/src/HOL/Library/Bourbaki_Witt_Fixpoint.) But the theory ~~/src/HOL/Library/While_Combinator connects the lfp and gfp fixpoints to the iterative implementation you are looking for; see the theorems lfp_while_lattice and gfp_while_lattice. These characterisations have the precondition that the function is monotone, so they cannot be used as code equations directly. So you have two options:
Use the while combinator instead of lfp/gfp in your code equations and/or definitions.
Tell the code preprocessor to use lfp_while_lattice as a [code_unfold] equation. This works if you also add all the rules that the preprocessor needs to prove the assumptions of these equations for the instances at which it should apply them. Hence, I recommend that you also add as [code_unfold] the monotonicity statement of your function and the theorem for proving the finiteness of char set, i.e., finite_class.finite. A sketch of this second option follows.
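A rough sketch of the second option, with a hypothetical step function on char set (depending on the exact preconditions of lfp_while_lattice, further [code_unfold] rules such as the finiteness facts mentioned above may be needed):

(* needs: imports "HOL-Library.While_Combinator" *)
definition step :: "char set ⇒ char set" where
  "step A = insert (CHR ''a'') A"  (* illustrative monotone step function *)

lemma step_mono [code_unfold]: "mono step"
  by (auto simp: step_def mono_def)

declare lfp_while_lattice [code_unfold]

definition reachable :: "char set" where
  "reachable = lfp step"
(* export_code reachable in SML should now implement the lfp by iteration *)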

How to manage all the various proof methods

Is there a "generic" informal algorithm that users of Isabelle follow, when they are trying to prove something that isn't proved immediately by auto or sledgehammer? A kind of general way of figuring out, if auto needs additional lemmas, formulated by the user, to succeed or if better some other proof method is used.
A related question is: Is there maybe a table to be found somewhere with all the proof methods together with the context in which to apply them? When I'm reading through the Programming and Proving tutorial, the description of various methods (respectively variants of some methods, such as the many variant of auto) are scattered through the text, which constantly makes me go back and for between text and Isabelle code (which also leads to forgetting what exactly is used for what) and which results in a very inefficient workflow.
No, there's no "generic" informal way. You can use try0, which tries all standard proof methods (like auto, blast, fastforce, …), and/or sledgehammer, which is more advanced.
After that, the fun part starts.
Can this theorem be shown with simpler helper lemmas? You can use the command sorry to assume that a lemma is true.
How would I prove this on a piece of paper? And then try to do this proof in Isabelle.
Ask for help :) Lots of people on Stack Overflow, #isabelle on freenode, and the Isabelle mailing list are waiting for your questions.
For your second question: No, there's no such overview. Maybe someone should write one, but as mentioned before you can simply use try0.
ammbauer's answer already covers lots of important stuff, but here are some more things that may help you:
When the automation gets stuck at a certain point, look at the available premises and the goal at that point. What kind of simplification did you expect the system to do there? Why didn't it do it? Perhaps the corresponding rule is just not in the simp set (add it with simp add:), or some preconditions of the rule could not be proved (in that case, add enough facts so that they can be proved, or prove them yourself in an additional step).
Isar proofs are good. If you have some complicated goal, try breaking it down into smaller steps in Isar. If you have bigger auxiliary facts that may even be of more general interest, try pulling them out as auxiliary lemmas. Perhaps you can even generalise them a bit. Sometimes that even simplifies the proof.
In the same vein: Too much information can confuse both you and Isabelle. You can introduce local definitions in Isar with define x where "x = …" and unfold them with x_def. This makes your goals smaller and cleaner and decreases the probability of the automation going down useless paths in its proof search.
Isabelle does not automatically unfold definitions, so if you have a definition, and you want to unfold it for a proof, you have to do that yourself by using unfolding foo_def or simp add: foo_def.
The defining equations of functions defined with fun or primrec are unfolded automatically by anything that uses the simplifier (simp, simp_all, force, auto) unless the equations (foo.simps) have manually been deleted from the simp set (by lemmas [simp del] = foo.simps or declare foo.simps [simp del]). The sketch below illustrates this and the previous two points.
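A small sketch of these three points (foo, bar, and x are toy names):

definition foo :: "nat ⇒ nat" where
  "foo n = n + 1"

lemma "foo 2 = 3"
  by (simp add: foo_def)  (* foo_def must be supplied explicitly *)

fun bar :: "nat ⇒ nat" where
  "bar 0 = 0"
| "bar (Suc n) = Suc (bar n)"

declare bar.simps [simp del]  (* now the simplifier no longer unfolds bar *)

(* a local definition in Isar, unfolded via x_def *)
lemma "(2::nat) + 2 = 4"
proof -
  define x where "x = (2::nat)"
  have "x + x = 4" by (simp add: x_def)
  then show ?thesis unfolding x_def .
qed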
Different proof methods are good at different things, and it takes some experience to know what method to use in what case. As a general rule, anything that requires only rewriting/simplification should be done with simp or simp_all. Anything related to classical reasoning (i.e. first-order logic or sets) calls for blast. If you need both rewriting and classical reasoning, try auto or force. Think of auto as a combination of simp and blast, and force is like an ‘all-or-nothing’ variant of auto that fails if it cannot solve the goal entirely. It also tries a little harder than auto.
Most proof methods can take options. You probably already know add: and del: for simp and simp_all, and the equivalent simp:/simp del: for auto. However, the classical reasoners (auto, blast, force, etc.) also accept intro:, dest:, elim: and the corresponding del: options. These are for declaring introduction, destruction, and elimination rules.
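For example (the rules named here exist in HOL and are already in the default claset, so auto would solve this goal without any hints; the example only demonstrates the option syntax):

lemma "x ∈ A ∩ B ⟹ x ∈ B ∩ A"
  by (auto intro: IntI dest: IntD1 IntD2)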
Some more information on the classical reasoner:
An introduction rule is a rule of the form P ⟹ Q ⟹ R that should be used whenever the goal has the form R, to replace it with the new goals P and Q.
A destruction rule is a rule of the form P ⟹ Q ⟹ R that should be used whenever a fact of the form P is in the premises, to replace the goal G with the new goals Q and R ⟹ G.
An elimination rule is something like thm exE (elimination of the existential quantifier). These are like a generalisation of destruction rules that also allows introducing new variables. Such rules often show up in things like case distinctions.
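Concrete HOL examples of the three kinds of rules:

thm conjI      (* introduction: ⟦?P; ?Q⟧ ⟹ ?P ∧ ?Q *)
thm conjunct1  (* destruction:  ?P ∧ ?Q ⟹ ?P *)
thm exE        (* elimination:  ⟦∃x. ?P x; ⋀x. ?P x ⟹ ?Q⟧ ⟹ ?Q *)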
The classical reasoner used by auto, blast, force etc. will use the rules in the claset (i.e. that have been declared intro/dest/elim) automatically whenever appropriate. If doing that does not lead to a proof, the automation will backtrack at some point and try other rules. You can disable backtracking for specific rules by using intro!: instead of intro: (and analogously for the others). Then the automation will apply that rule whenever possible without ever looking back.
The basic proof methods rule, drule, erule correspond to applying a single intro/dest/elim rule and are good for single step reasoning, e.g. in order to find out why automatic methods fail to make progress at a certain point. intro is like rule but applies the set of rules it is given iteratively until it is no longer possible.
safe and clarify are occasionally useful. The former essentially strips away quantifiers and logical connectives (try it on a goal like ∀x. P x ∧ Q x ⟶ R x, as in the snippet below), and the latter similarly tries to ‘clean up’ the goal. (I forget what exactly it does; I just use it occasionally when I think it might be useful.)
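To see what safe does, try it on that goal (P, Q, and R are arbitrary fixed predicates here):

context
  fixes P Q R :: "nat ⇒ bool"
begin

lemma "∀x. P x ∧ Q x ⟶ R x"
  apply safe
  (* the goal is now:  ⋀x. ⟦P x; Q x⟧ ⟹ R x *)
  oops

end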

Isabelle/HOL foundations

I have seen a lot of documentation about Isabelle's syntax and proof strategies. However, I have found little about its foundations. I have a few questions and would be very grateful if someone could take the time to answer them:
Why doesn't Isabelle/HOL admit functions that do not terminate? Many other languages such as Haskell do admit non-terminating functions.
What symbols are part of Isabelle's meta-language? I read that there are symbols in the meta-language for universal quantification (⋀) and for implication (⟹). However, these symbols have their counterparts in the object-level language (∀ and -->). I understand that --> is an object-level function of type bool ⇒ bool ⇒ bool. However, how are ∀ and ∃ defined? Are they object-level Boolean functions? If so, they are not computable (considering infinite domains). I noticed that I am able to write Boolean functions in terms of ∀ and ∃, but they are not computable. So what are ∀ and ∃? Are they part of the object level? If so, how are they defined?
Are Isabelle theorems just Boolean expressions? Are Booleans then part of the meta-language?
As far as I know, Isabelle is a strict programming language. How can I use infinite objects? Let's say, infinite lists. Is it possible in Isabelle/HOL?
Sorry if these questions are very basic. I cannot seem to find a good tutorial on Isabelle's meta-theory and would love it if someone could recommend one.
Thank you very much.
You can define non-terminating (i.e. partial) functions in Isabelle (cf. the function package manual, section 8). However, partial functions are more difficult to reason about: whenever you want to use one of their defining equations (the psimps rules, which replace the simps rules of a normal function), you have to show that the function terminates on that particular input first.
In general, things like non-definedness and non-termination are always problematic in a logic – consider, for instance, the function ‘definition’ f x = f x + 1. If we were to take this as an equation on ℤ (integers), we could subtract f x from both sides and get 0 = 1. In Haskell, this problem is ‘solved’ by saying that this is not an equation on ℤ, but rather on ℤ ∪ {⊥} (the integers plus bottom) and the non-terminating function f evaluates to ⊥, and ‘⊥ + 1 = ⊥’, so everything works out fine.
However, if every single expression in your logic could potentially evaluate to ⊥ instead of a ‘proper’ value, reasoning in this logic would become very tedious. This is why Isabelle/HOL chooses to restrict itself to total functions; things like partiality have to be emulated with things like undefined (which is an arbitrary value that you know nothing about) or option types.
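For instance, partial operations can be emulated like this (toy names, for illustration only):

definition safe_div :: "nat ⇒ nat ⇒ nat option" where
  "safe_div m n = (if n = 0 then None else Some (m div n))"

definition head :: "'a list ⇒ 'a" where
  "head xs = (case xs of [] ⇒ undefined | x # _ ⇒ x)"  (* total, but unspecified on [] *)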
I'm not an expert on Isabelle/Pure (the meta logic), but the most important symbols are definitely
⋀ (the universal meta quantifier)
⟹ (meta implication)
≡ (meta equality)
&&& (meta conjunction, defined in terms of ⟹)
Pure.term, Pure.prop, Pure.type, Pure.dummy_pattern, Pure.sort_constraint, which fulfil certain internal functions that I don't know much about.
You can find some information on this in the Isabelle/Isar Reference Manual in section 2.1, and probably more elsewhere in the manual.
Everything else (that includes ∀ and ∃, which indeed operate on Boolean expressions) is defined in the object logic (usually HOL). You can find the definitions, or rather the axiomatisations, in ~~/src/HOL/HOL.thy (where ~~ denotes the Isabelle root directory):
All_def: "All P ≡ (P = (λx. True))"
Ex_def: "Ex P ≡ ∀Q. (∀x. P x ⟶ Q) ⟶ Q"
Also note that many, if not most, Isabelle functions are not computable. Isabelle is not a programming language, although it does have a code generator that allows exporting Isabelle functions as code to programming languages, as long as you can give code equations for all the functions involved.
3) Isabelle theorems are a complex datatype (cf. ~~/src/Pure/thm.ML) containing a lot of information, but the most important part, of course, is the proposition. A proposition is something from Isabelle/Pure, which in fact only has propositions and functions (and itself and dummy, but you can ignore those).
Propositions are not booleans – in fact, there isn't even a way to state that a proposition does not hold in Isabelle/Pure.
HOL then defines (or rather axiomatises) booleans and also axiomatises a coercion from booleans to propositions: Trueprop :: bool ⇒ prop
4) Isabelle is not a programming language, and apart from that, totality does not mean you have to restrict yourself to finite structures. Even in a total programming language, you can have infinite lists (cf. Idris's codata).
Isabelle is a theorem prover, and logically, infinite objects can be treated by axiomatising them and then reasoning about them using the axioms and rules that you have.
For instance, HOL assumes the existence of an infinite type and defines the natural numbers on that. That already gives you access to functions nat ⇒ 'a, which are essentially infinite lists.
You can also define infinite lists and other infinite data structures as codatatypes with the (co-)datatype package, which is based on bounded natural functors.
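For instance, here is a minimal sketch of infinite lists as a codatatype (a polished version exists in HOL-Library.Stream; the names here are illustrative):

codatatype 'a stream = SCons (shd: 'a) (stl: "'a stream")

primcorec upfrom :: "nat ⇒ nat stream" where
  "upfrom n = SCons n (upfrom (n + 1))"  (* the infinite stream n, n+1, n+2, … *)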
Let me add some points to two of your questions.
1) Why doesn't Isabelle/HOL admit functions that do not terminate? Many other languages such as Haskell do admit non-terminating functions.
In short: Isabelle/HOL does not require termination, but totality (i.e., there is a specific result for each input to the function). Totality does not mean that a function actually terminates when transcribed to a (functional) programming language, or even that it is computable at all.
Therefore, talking about termination is somewhat misleading, even though this is encouraged by the fact that Isabelle/HOL's function package uses the keyword termination for proving a certain property P, about which I will say a little more below.
On the one hand, the term "termination" might sound more intuitive to a wider audience. On the other hand, a more precise description of P would be well-foundedness of the function's call graph.
Don't get me wrong: termination is not really a bad name for the property P. It is even justified by the fact that many of the techniques implemented in the function package are very close to termination techniques from term rewriting or functional programming (like the size-change principle, dependency pairs, lexicographic orders, etc.).
I'm just saying that it can be misleading. The answer to why that is the case also touches on question 4 of the OP.
4) As far as I know, Isabelle is a strict programming language. How can I use infinite objects? Let's say, infinite lists. Is it possible in Isabelle/HOL?
Isabelle/HOL is not a programming language and it specifically does not have any evaluation strategy (we could alternatively say: it has any evaluation strategy you like).
And here is why the word termination is misleading (drum roll): if there is no evaluation strategy and we have proved termination of a function f, people might expect f to terminate independently of the strategy used. But this is not the case. A termination proof of a function rather ensures that f is well-defined. Even if f is computable, a proof of P merely ensures that there is some evaluation strategy for which f terminates.
(As an aside: what I call "strategy" here, is typically influenced by so called cong-rules (i.e., congruence rules) in Isabelle/HOL.)
As an example, it is trivial to prove that the function (see Section 10.1, Congruence rules and evaluation order, in the documentation of the function package):
fun f' :: "nat ⇒ bool"
where
"f' n ⟷ f' (n - 1) ∨ n = 0"
terminates (in the sense defined by termination) after adding the cong-rule:
lemma [fundef_cong]:
  "Q = Q' ⟹ (¬ Q' ⟹ P = P') ⟹ (P ∨ Q) = (P' ∨ Q')"
  by auto
which essentially states that logical or should be "evaluated" from right to left. However, if you write the same function in, e.g., OCaml, it causes a stack overflow ...
EDIT: this answer is not really correct, check out Lars' comment below.
Unfortunately I don't have enough reputation to post this as a comment, so here is my go at an answer (please bear in mind I am no expert in Isabelle, but I also had similar questions once):
1) The idea is to prove statements about the defined functions. I am not sure how familiar you are with computability theory, but think about the Halting Problem and the fact that most undecidability problems stem from it (such as the Acceptance Problem). Imagine defining a function for which you can't prove termination. How could you then still prove that it returns the number 42 when given input "ABC" and doesn't go into an infinite loop?
If instead you limit yourself to terminating functions, you can prove much more about them; essentially, you are making a trade-off (or at least this is how I see it).
These ideas stem from constructivism and intuitionism, and I recommend you check out Robert Harper's very interesting lecture series on type theory: https://www.youtube.com/watch?v=9SnefrwBIDc&list=PLGCr8P_YncjXRzdGq2SjKv5F2J8HUFeqN
You should especially check out the part about the absence of the Law of Excluded Middle: http://youtu.be/3JHTb6b1to8?t=15m34s
2) See Manuel's answer.
3), 4) Again, see Manuel's answer, keeping in mind intuitionistic logic: "the fundamental entity is not the boolean, but rather the proof that something is true".
For me it took a long time to get adjusted to this way of thinking, and I'm still not sure I understand it. I think the key, though, is to understand that it is a more-or-less completely different way of thinking.

Difference between Definition and Let in Coq

What is the difference between Definition and Let in Coq? Why do some definitions require proofs?
For example, this is a piece of code from g1.v in group theory.
Definition exp : Z -> U -> U.
Proof.
  intros n a.
  elim n; clear n.
  exact e.
  intro n.
  elim n; clear n.
  exact a.
  intros n valrec.
  exact (star a valrec).
  intro n; elim n; clear n.
  exact (inv a).
  intros n valrec.
  exact (star (inv a) valrec).
Defined.
What is the aim of this proof?
I think what you're asking isn't really related to the difference between the Definition and Let commands in Coq. Instead, you seem to be wondering why some definitions in Coq contain proof scripts.
One interesting feature of Coq is that the language one uses for writing proofs and programs is actually the same. This language is known as Gallina, which is the programming language people work with when using Coq. When you write something like fun x => x + 5, that is a program in Gallina.
When writing proofs, however, people usually use another language, called Ltac. This is the language that appears in your exp example. This could lead you to believe that proofs in Coq are represented in a different language, but this is not true: what Ltac scripts do is actually build proof terms in Gallina. You can see this by using the Print command, e.g.
Print exp.
The reason for having a separate language for writing proofs, even though proofs and programs are written in the same language, is that Gallina is a bit hard to use directly when writing proofs. Try using the Print command directly on a complicated theorem to see how hard that can be.
Now, even though Ltac is mostly meant for writing proofs, nothing forbids you from using it to write normal programs, since the end product is the same: a Gallina term. Usually, people prefer to use Gallina when writing programs because it is easier to read. However, people might resort to Ltac for writing programs when doing it directly in Gallina would be too cumbersome. I personally would prefer to use Gallina directly for writing functions such as exp in your example, although that's arguably a matter of taste.
