How to write expansion rules in Isabelle?

I encountered some problems while learning Isabelle's expansion rules and hope someone can answer them. Thank you.
I need to import Isabelle's built-in HOL library to introduce some predefined quantifiers. When I use Isabelle2017 to open %ISABELLE_HOME%\src\HOL\HOL.thy, the seventh line "theory HOL" reports the error "Cannot update finished theory "HOL.HOL"".
Have you encountered the same problem? How did you introduce the built-in rules in the HOL library?
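For reference, a minimal sketch of the usual way to get at the built-in HOL rules: rather than opening HOL.thy itself (it belongs to the already-built HOL session image, hence "Cannot update finished theory"), write your own theory that imports Main. The theory name Scratch here is illustrative.

    theory Scratch
      imports Main
    begin

    (* The built-in HOL rules and quantifiers are now in scope, e.g.: *)
    thm allI exI conjI impI

    end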

Related

Sledgehammer within Locale

I am a happy user of Isabelle/Isar and Sledgehammer, but am only now trying to use locales as well, since in my use case there are overwhelming arguments for them.
I am using the Isabelle December 2021 distribution, but most of the time when I try to use sledgehammer within a locale context, I get a message like this:
"cvc4": Prover error:
exception TERM raised (line 457 of "~~/src/HOL/Tools/SMT/smt_translate.ML"): bad SMT term
It is the same message for other provers as well. Is this a well-known problem? Without locales, such a problem only came up when my theory name clashed with some HOL theory name, and renaming my theory was a workaround. Is there something similar at play here? Is there an easy fix? I use sledgehammer a lot, so not being able to use it within a locale would be a severe blow against using locales.
For the sake of completeness, here is a summary of the discussion on the Isabelle mailing list.
There is an issue in the translation from Isabelle to SMT-LIB (the input language of SMT solvers): a let nested inside another let is not translated correctly. This should be fixed in the next Isabelle release.
The workaround in the meantime is to avoid a let inside a let, as sketched below.
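A hedged sketch of the shape involved (the lemma itself is made up, not taken from the discussion): the first proof sends the nested lets through the faulty translation, while unfolding Let_def removes them before any solver sees them.

    (* Illustrative only: a let nested inside a let exercised the
       faulty SMT-LIB translation in the affected release. *)
    lemma "(let x = a in (let y = x + (b::nat) in y + y)) = (a + b) + (a + b)"
      by smt   (* could fail with "bad SMT term" *)

    (* Workaround: eliminate the nested lets before calling the solver. *)
    lemma "(let x = a in (let y = x + (b::nat) in y + y)) = (a + b) + (a + b)"
      by (simp add: Let_def)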

No Global Contract available for procedure / function

I've got a procedure within a SPARK module that calls the standard Ada.Text_IO.Put_Line.
During proving I get the following warning: no Global contract available for "Put_Line".
I already know how to add the respective data dependency contract to procedures and functions written by myself, but how do I add them to procedures/functions written by others, where I can't edit the source files?
I looked through sections 5.2 and 7.4 of the AdaCore SPARK 2014 user's guide but didn't find an example that solves my problem.
This means that the analyzer cannot "see" whether global variables might be affected when this function is called. It therefore assumes this call is not modifying anything (otherwise all other proofs could be refuted immediately). This is likely a valid assumption for your specific example, but it might not be valid on an embedded system, where a custom implementation of Put_Line might do anything.
There are two ways to convey the missing information:
1. The verifier can examine the source code of the function and then try to generate global contracts itself.
2. Global contracts are specified explicitly; see RM 6.1.4 (http://docs.adacore.com/spark2014-docs/html/lrm/subprograms.html#global-aspects) and the sketch below.
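To illustrate the second option, an explicit Global aspect might look like this (package and variable names are made up):

    package Counters with SPARK_Mode is

       Counter : Natural := 0;

       --  The Global contract states exactly which globals the
       --  procedure may read and/or write; the prover checks the
       --  body against it, and callers can rely on it.
       procedure Increment
         with Global => (In_Out => Counter);

    end Counters;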
In this case, the procedure you are calling is part of the run-time system (RTS), and therefore the source is not visible, and you probably cannot/should not change it.
What to do in practice?
Suppressing warnings is almost never a good idea, especially not when you are working on something safety-critical. Usually the code has to be changed until the warning goes away, or some justification process has to start.
If you are serious about the analysis results, I recommend not using such subprograms. If you really need output there, either write your own procedure that replaces the RTS subprogram, or ensure that the subprogram really has no side effects. This is further backed up by what Frédéric has linked: even if the callee has no side effects, you don't know whether it raises an exception for specific inputs (e.g., very long strings).
If you are not so serious about the results, then you can consider this specific one as a warning that you could live with.
Wrapper packages for use in development of SPARK applications may be found here:
https://github.com/joakim-strandberg/aida_2012
I think you just can't add SPARK contracts to code you don't own, especially code from the Ada standard library.
About Text_IO, I found something in the reference manual that may be valuable to you.
EDIT
Another solution, besides what Martin said, is to create a wrapper package, following the book "Building High Integrity Applications with SPARK".
Since SPARK requires you to deal with SPARK packages but allows you to depend on a SPARK spec with an Ada body, the solution is to build a SPARK package wrapping your Ada.Text_IO calls.
It might be tedious, as you will have to wrap possible exceptions, possibly define specific types and so on, but this way you'll be able to discharge VCs on your full SPARK package.
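A hedged sketch of that wrapper idea (names are illustrative; a more faithful wrapper would model the output stream as abstract state instead of claiming Global => null):

    --  SPARK spec: this contract is all the prover ever sees.
    package IO_Wrapper with SPARK_Mode is
       procedure Put_Line (Item : String)
         with Global => null;  --  assumption: no tracked state is touched
    end IO_Wrapper;

    --  Plain Ada body, excluded from SPARK analysis.
    with Ada.Text_IO;
    package body IO_Wrapper with SPARK_Mode => Off is
       procedure Put_Line (Item : String) is
       begin
          Ada.Text_IO.Put_Line (Item);
       end Put_Line;
    end IO_Wrapper;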

Error message in Isabelle/HOL

When applying the wrong tactic or the wrong deduction rule, the error message is usually too general:
Failed to apply initial proof method
I am using Isabelle to teach natural deduction. When Isabelle complains, some students change the rule/tactic arbitrarily without reflecting on the possible causes of the error. A more detailed error message could be part of the learning process of Isabelle, I think.
How can those error messages be made student-friendly? Does that require editing the source code, or can it be managed by defining more expressive natural-deduction tactics?
Tactics in Isabelle can be thought of as chainable non-deterministic transformations of the goal state. That means that the question of what specifically caused a tactic to fail is difficult to answer in general, and there is no mechanism to track such information in Isabelle's tactic system. However, one could relatively easily modify existing tactics such that they can optionally output some tracing information.
However, I have no idea what this information should be. There are simple tactics such as rule where the reason why applying it fails is always that the rule that it is given cannot be unified with the goal (and possibly chained facts), and there are similarly simple tactics like intro, drule, frule, erule, and elim. Such unification-related problems can be debugged quite well sometimes using declare [[unify_trace_failure]], which prints some tracing information every time a unification fails.
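For instance (a made-up classroom goal), with that option enabled, a wrongly chosen introduction rule produces a unification trace instead of only the generic failure message:

    declare [[unify_trace_failure]]

    lemma "A ∧ B ⟶ A"
      (* apply (rule conjI) -- fails: conjI's conclusion ?P ∧ ?Q cannot be
         unified with an implication, and the trace now says so *)
      apply (rule impI)
      apply (erule conjE)
      apply assumption
      done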
With simp and auto, the situation is much less clear because of how many different things these methods can do. Essentially, when the proof method could not be applied at all, it means that ‘none of the things that simp and auto can do worked for this goal’. For simp, this includes simplification, splitting, linear arithmetic, and probably a lot more things that I forgot. For auto, it additionally includes classical reasoning with a certain search depth. One cannot really say easily what specific thing went wrong when these methods fail.
Some specialised tactics do print more specific error messages if something goes wrong, e.g. sat and smt sometimes print a special error message when they have found a counterexample to the goal, but I cannot even imagine what more helpful output for something like simp or auto would look like. If you have an idea, please do tell me.
I think this problem cannot really be solved with error messages; one must simply get to know the system and the tactics one uses better and understand what they do and when they fail. Perhaps it would be good to have a kind of catalogue of commonly-used tactics that mentions these things.

Prerequisite knowledge for understanding The Definition of Standard ML

Could anyone tell me what the background theories in The Definition of Standard ML are? I found it very interesting and beautiful. I did learn a little SML, but I want more and don't know where to start (to understand TDSML).
Thanks in advance.
For the old version of the Definition (SML'90) there actually was a separate book called "Commentary on Standard ML", which explained how to interpret the Definition. Both the SML'90 Definition and the Commentary are long out of print, but fortunately, are available as free PDFs.
The SML'90 Definition had some differences to SML'97, in particular regarding the module system. Overall, it was more complicated. But much of the Commentary should still apply, and if you have both versions side by side, it shouldn't be hard to figure out what's still relevant.
This is off the top of my head: in order to understand the methods employed in The Definition of Standard ML, you should have some basic understanding of:
set theory
functions
first-order logic
type theory
Additionally, you should be able to read and understand inference rules of which the book makes extensive use.
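For instance, a typing rule for function application in the general shape the book uses (illustrative, not a verbatim quote) reads:

\[
  \frac{\Gamma \vdash e_1 : \tau \rightarrow \tau' \qquad \Gamma \vdash e_2 : \tau}
       {\Gamma \vdash e_1 \; e_2 : \tau'}
\]

Read it as: if, under context Γ, e₁ has function type τ → τ' and e₂ has type τ, then the application e₁ e₂ has type τ'.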

Static code analysis tool for Common Lisp?

I'm busy learning Common Lisp, and I'm looking for a static code analysis tool that will help me develop better style and avoid falling into common traps.
I've found Lisp Critic and I think it looks good, but I was hoping that someone may be able to recommend some other tools, and / or share their experiences with them.
Given the dynamic nature of Lisp, static analysis is everything from tough to impossible, depending on the type of source code.
For some purposes I would recommend using the SBCL compiler. Check out its manual for what features it provides. One feature is some form of type inference. It provides also a lot of standard warnings for things like undeclared variables, type problems, calling functions with the wrong number of args, using undefined functions, violating the ANSI CL standard in various ways and more.
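A small illustration (made-up code): compiling a file containing the following with SBCL, e.g. via compile-file, already triggers several of those warnings:

    ;; Illustrative only: each form provokes a standard SBCL warning.
    (defun add-one (x)
      (helper-i-never-wrote x)   ; undefined function
      (+ x "1"))                 ; type problem: "1" is not a NUMBER

    (defun caller ()
      (add-one 1 2))             ; wrong number of arguments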
The best way to learn about good style is to read a lot of code and ask for others to review your code. This isn't something that's specific to Common Lisp.
I think that one great tool is lisp-critic; you can get some information here:
http://quickdocs.org/lisp-critic/
There is also a remake that was done by Xach:
http://xach.com/lisp/
