How to generate an HTML version of an Isabelle theory

I have an Isabelle theory file, called John.thy. I would like to show it to my friend, but my friend doesn't have Isabelle, and the raw .thy files aren't very easy to read. I have seen some web-pages in the Isabelle library (like this one: http://isabelle.in.tum.de/library/HOL/Finite_Set.html) that have pretty syntax highlighting, and I would like my theory to look like that.
So how can I make John.html? I have looked at the documentation and it all looks rather scary and difficult, what with all the build files and make files and the like. Could some kind soul explain the simplest way to do this?

First the answer to your question (see also the Isabelle System Manual, Section 3.2 - System build options):
To generate HTML for your John.thy, create a file named ROOT, in the same directory as John.thy, with the following contents
session John = "HOL" +
  theories
    John
and then, staying in that same directory, invoke
isabelle build -d . -o browser_info -v John
where
-d . specifies that the current directory should be searched for sessions (which are specified in a ROOT file)
-o browser_info is the essential flag to generate HTML (a.k.a. browser info), and
-v (the verbose flag) is useful to see in which directory the result is put
The above invocation will output something similar to
Started at Thu Jul 25 09:38:20 JST 2013 [...]
[...]
Session Pure
Session HOL (main)
Session John
Running John ...
John: theory John
[...]
Browser info at /home/username/.isabelle/Isabelle2013/browser_info/HOL/John
[...]
(where [...] indicates omitted output). So here you see which directory you have to consult to obtain the HTML files.
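There should also be an index.html in that directory (on the machine above, ~/.isabelle/Isabelle2013/browser_info/HOL/John/index.html), from which you can browse John.html with the same syntax highlighting as the library pages.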
Having said this, for at least the following reasons I personally prefer PDF over HTML:
with -o browser_info you get a bunch of files in a directory (instead of just a single self-contained file when using -o document=pdf)
not all Isabelle symbols are nicely rendered in HTML (whereas you have full control over symbols when generating PDFs)
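For reference, a PDF build uses the same ROOT file; a minimal invocation would look roughly like the following (this assumes the session also provides a LaTeX document setup, e.g. a document/root.tex as described in the system manual, and is not part of the answer above):
isabelle build -d . -o document=pdf -v John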
Note: If you're using the Isabelle application for Mac OS, you may need to replace isabelle above with /Applications/Isabelle2013.app/Isabelle/bin/isabelle or add /Applications/Isabelle2013.app/Isabelle/bin/ to your PATH.

Related

What does $$[QT_HOST_DATA/get] do in a Qt Feature configuration (.prf) file?

Where is the following syntax, used in a feature configuration (.prf) file, defined?
$$[QT_HOST_DATA/get]
I know $$[ ... ] is to access QMake properties as explained in the Qt doc, but where is the /get part of the notation in $$[QT_HOST_DATA/get] clarified? And what does it precisely do?
Also, inside a Qt .conf file, what is the difference between include (for other .conf files) and load() (for .prf files)?
If include(some.conf) merely causes the contents of some.conf to be literally pasted into the including .conf file, what does load() do exactly?
I have found no info about the structure of .prf files.
https://doc.qt.io/qt-5/qmake-advanced-usage.html says that you can create .prf files, but says nothing about how these files are processed or should be structured.
Thanks for any clarifications you can provide!
where is the /get part of the notation in $$[QT_HOST_DATA/get] clarified? And what does it precisely do?
Nowhere, except the qmake source code. It looks like all qmake properties may have up to four special "subproperties": xxx/dev, xxx/src, xxx/raw and xxx/get. However, what they are used for is a mystery. Executing qmake -query QT_HOST_DATA/get produces (on my machine) just the same value as plain $$[QT_HOST_DATA].
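For illustration, the properties can be inspected from the command line; on a hypothetical Linux installation the output might look like this (the paths are invented for the example):
$ qmake -query QT_HOST_DATA
/usr/lib/qt5
$ qmake -query QT_HOST_DATA/get
/usr/lib/qt5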
I have found no info about the structure of .prf files.
Basically, a .prf file is just a "system include file". There are two points, though:
All .prf files reside in known location(s) pointed to by the QMAKEFEATURES variable.
BTW, QMAKEFEATURES is a sort of "protected variable". I managed to change it only with the help of the (also undocumented) cache() function:
QMAKEFEATURES *= mydir # '*=' because of 3 passes under Windows
# 'transient' prevents creation of a file on disk
# only 'super' seems to work OK; no idea what's wrong with 'stash' or 'cache'
cache(QMAKEFEATURES, set transient super)
# now I can load .prf from <mydir> too...
A .prf file can be implicitly loaded by mentioning it in the CONFIG variable. For example, CONFIG += qt (which is the default, btw.) results in the inclusion of <SomePrefix>/share/qt5/mkspecs/features/qt.prf. Note that this takes place after the whole .pro has been processed, so a .prf file can be used to post-process user options.
what does load() do exactly?
It's just a version of include() designed specially for .prf files. All it does is include the .prf file. But, unlike CONFIG += xxx, it does this immediately, and, unlike plain include(), you shouldn't specify a path or extension.
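As a rough sketch of the two loading mechanisms (the feature name mycheck and its contents are invented for this example), suppose a file mycheck.prf lives in a directory listed in QMAKEFEATURES:
# mycheck.prf -- invented example feature file
message(post-processing $$TARGET)
DEFINES += MY_CHECK_ENABLED
Then, in a .pro file, either of these pulls it in:
CONFIG += mycheck   # processed after the whole .pro has been read
load(mycheck)       # included immediately, no path or extension given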

OCaml: How can I get the path to the *current module* / my project's directory?

I'm new to OCaml, but I'm trying to figure out the equivalent of __filename, __dirname from Node. That is, I need to build a path relative to the file containing the code in question.
For reference, I'm working through Ghuloum's IACC: http://ell.io/tt$ocameel
I'm building my first compiler, and I have an utterly-simplistic ‘runtime’ file (in C — temporarily) adjacent to the compiler's source-code. I need to be able to pass the path to this file, as an argument (or a pre-compiled version, I suppose) to gcc or my linker, to have it linked against my compiler's output when I invoke the linker/assembler tooling.
(This may be a stupid question — I'm at a bit of an unknown-unknown here, “how does a compiler get the runtime to the linker”, or something like that. Any commentary about idiomatic solutions to this is welcome, even if it's not a direct answer to the above question!)
If you're running the source file directly via ocaml myfile.ml, Sys.argv.(0) will give you the path to the source file and you can use Filename.dirname to get the directory from that.
If you first compile the source file into an executable and then run the executable, Sys.argv.(0) will give you the name of the executable. In that scenario it's impossible to get the location of the source code (especially if you consider that the person running the executable might not even have the source code on their system).
If you set up your project structure so that your sources live in src/, your compiled binary in bin/ and the compiled stdlib in lib/, you could just use Filename.dirname Sys.argv.(0) ^ "/../lib" as the library path for gcc. This will work whether you run ocaml src/mycompiler.ml, bin/mycompiler or just mycompiler after installing everything to /usr/ or /usr/local/.
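A minimal OCaml sketch of that idea (the file names runtime.c, output.s and a.out are invented for the example, not taken from the question):
(* locate the runtime file relative to the running binary *)
let runtime_path =
  Filename.concat (Filename.dirname Sys.argv.(0)) "../lib/runtime.c"

(* hand the runtime and the generated assembly to gcc for linking *)
let () =
  let cmd =
    Printf.sprintf "gcc -o a.out output.s %s" (Filename.quote runtime_path) in
  exit (Sys.command cmd)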

Stop Python3 creating module cache in system directory

In Question 2918898, users discussed how to avoid caching because
modules were changing, and solutions focused on reloading. My question is
somewhat different; I want to avoid caching in the first place.
My application runs on Un*x and lives in /usr/local. It imports a
module with some shared code used by this application and another.
It's normally run as an ordinary user, and Python doesn't cache the
module in that case, because it doesn't have write permission for that
system directory. All good so far.
However, I sometimes need to run the application as superuser, and then
it does have write permission and it does cache it, leaving unsightly
footprints in a system directory. Do not want.
So ... any way to tell CPython 3.2 (or later, I'm willing to upgrade)
not to cache the module? Or some other way to solve the problem?
Changing the directory permissions doesn't work; root can still write,
root is all-powerful.
I looked through PEP 3147 but didn't see a way to prevent caching.
I don't recall any way to import code other than import. I suppose I
could read a simple text file and exec it, but that seems inelegant
and bug-prone.
The run-as-root is accomplished by calling the program with sudo in a
shell script, and I can have the shell script delete the cache after the
run, but I'm hoping for something more elegant that doesn't change the
directory's last-modified timestamp.
Implemented solution, based on Wander Nauta's answer:
Since I run the executable as a plain filename, not as python executablename, I went with the environment variable. First, the
sudoers file needs to be changed to allow setting environment
variables:
tom ALL=(ALL) SETENV: NOPASSWD: /usr/local/bkup/bin/mkbkup
Then, the invocation needs to include the variable:
/usr/bin/sudo PYTHONDONTWRITEBYTECODE=true /usr/local/bkup/bin/mkbkup "$@"
You can start python with the -B command-line flag to prevent it from writing cached bytecode.
$ ls
bar.py foo.py
$ cat foo.py
import bar
$ python -B foo.py; ls
bar.py foo.py
$ python foo.py; ls
bar.py foo.py __pycache__
Setting the PYTHONDONTWRITEBYTECODE environment variable to a non-empty string, or setting sys.dont_write_bytecode to True, will have the same effect.
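For example, the environment-variable form gives the same result as -B in the transcript above (same files as before):
$ PYTHONDONTWRITEBYTECODE=1 python foo.py; ls
bar.py foo.py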
Of course, I'd say that the benefits in this case (faster loading times for your app, for free) vastly outweigh the perceived unsightliness you were talking about - but if you really want to disable caching, here's how.
Source: man python

Setup Unix Environment from Current Directory

I have a SunOS system, which is what complicates this situation from being simple.
I have some scripts that run from different paths (unavoidable) and the system has a path structure that has the "System Environment" in the path, which I can then extract from the path. I have a simple script which is called before or sourced from every other script to get the Environment and set several other common variables. The problem is, now that there are 3 different areas that may be calling this script, it doesn't properly extract the Environment from the path.
Here are simple examples of the 3 paths that might exist:
/dir1/dir2/${ENV}/bin/script1.ksh
/dir1/dir2/${ENV}/services/service_name/script2.ksh
/dir1/dir2/${ENV}/services/service_name/log/script3.ksh
I'd like to have one script that would be able to get ${ENV} no matter which one of the paths was provided, as opposed to my current strategy of three separate ones.
Here is how I currently get the first ${ENV}:
#!/bin/ksh
export BASE_DIR=${0%/*/*}
export ENV=${BASE_DIR##*/}
2nd Script:
#!/bin/ksh
export CURR_DIR=$( cd -- "$(dirname -- "$(command -v -- "$0")")" && pwd)
export BASE_DIR=${CURR_DIR%/*/*}
export ENV=${BASE_DIR##*/}
As I stated, this is a SunOS system, so it has an old limited version of KSH. No set -A or substitution.
Any ideas on the best strategy to limit my repetitiveness of scripts?
Thanks.
It looks from your example like your ${ENV} directory is at a fixed depth from the root, in which case you can easily get the name of the directory by starting from the other end:
export ENV=`pwd | sed -e "s%\(/dir1/dir2/\)\([^/]*\).*%\2%"`
I'm using '%' so I can match '/' without escaping. Without knowing specifics about which version of SunOS/Solaris you're using, I can't be certain how compliant your sed is, but Bruce Barnett includes this in his tutorials, which are very closely aligned with late SunOS and early Solaris versions.
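For illustration, if the current directory were /dir1/dir2/PROD/services/service_name (with PROD standing in for your ${ENV} value; the name is made up for the example), the pipeline would behave like this:
$ pwd
/dir1/dir2/PROD/services/service_name
$ pwd | sed -e "s%\(/dir1/dir2/\)\([^/]*\).*%\2%"
PROD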
If your scripts are all called by the same user, then you might want to include the above in that user's .profile, then the ENV variable will be accessible to all scripts owned/executed by that user.
UPDATE: Lee E. McMahon's "SED -- A Non-Interactive Text Editor" - written in 1978 - includes pattern grouping using escaped parentheses, so it should work for you on SunOS. :)

What is the Unix way for a console script to use config files?

Let's imagine we have some script 'm12' (I've just invented this name) that runs
on Linux computers. If it is situated in your $PATH, you can easily run it
from the console like this:
m12
It will work with the default parameters. But you can customize the behaviour of
this script by running it with something like:
m12 --enable_feature --select=3
It is great and it will work. But I want to create a config file ~/.m12rc so I
will not need to specify --enable_feature --select=3 every time I run it.
It can be easily done.
The difficult part starts here.
So, I have a ~/.m12rc config file, but I want to start m12 without the parameters that
are stored in that config file. What is the Unix way to do this? Should I run
script like this:
m12 --ignore_config
or is there a better solution?
Next. Let's imagine I have a config file ~/.m12rc and I want some parameters from that
file, but want to change them a bit. How should I run the script, and how should the
script behave?
And the last question. Is it a good idea for the script to first look for .m12rc
in the current directory, then in ~/ and then in /etc?
I'm asking all these questions because I want to implement config files in my
small script and I want to make the correct decisions about the design.
The book 'The Art of Unix Programming' by E S Raymond discusses such issues.
You can override the config file with --config-file=/dev/null.
You would normally use the order:
System-wide configuration (/etc/m12/m12rc, or just /etc/m12).
User's personal configuration (~/.m12rc)
Local directory configuration (./.m12rc)
Command-line options
with each later-listed item overriding earlier listed items. You should be able to specify the configuration file to read on the command line; arguably, that should be given precedence over other options. Think about --no-system-config or --no-user-config or --no-local-config. Many scripts do not warrant a system config file. Most scripts I've developed would not use both local config and user config. But that's the way my mind works.
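A minimal sketch of that reading order in shell (the file names follow the m12 example above; the exact shape of the loop is mine, not prescribed by anything in this answer):
# read system-wide, then per-user, then per-directory configuration;
# files sourced later override values set by earlier ones
for conf in /etc/m12/m12rc "$HOME/.m12rc" ./.m12rc
do
    [ -f "$conf" ] && . "$conf"
done
# command-line options are parsed after this loop, so they take precedence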
The way I package standard options is to have a script in $HOME/bin (say m12a) that does it for me:
#!/bin/sh
exec m12 --enable_feature --select=3 "$@"
If I want those options, I run m12a. If I want some other options, I run raw m12 with the requisite options. I have multiple hundreds of files in my personal bin directory (about 500 on my main machine, a Mac; some of those are executables, but many are scripts).
Let me share my experience. I normally source the config file at the beginning of the script. In the config file I also handle all the parameter switches:
# default used when no -u option is given
DEFAULT_USER=blabla
# note the trailing ':' in the optstring so that -u takes an argument
while getopts ":u:" opt
do
  case $opt in
    u)
      export APP_USER=$OPTARG
      ;;
  esac
done
export APP_USER=${APP_USER-$DEFAULT_USER}
Then within the script I just use the variables; this lets me have a number of scripts sharing the same input parameters.
In your case I imagine you would move the getopts section into the script and source the config file after it (unless a switch was given to skip sourcing).
You should not put your script's config file in /etc; that would require root privileges, and you can simply live with a config file in the home directory.
If you nevertheless want to make your script available to other users, it should go to /usr/share...
Another option is to use thor (a Ruby gem); it makes handling input parameters much simpler and avoids the work needed to get the same result in bash, e.g. getopts supports only single-letter switches.

Resources