Program extraction using native integers/words (not bignums) from Isabelle theory

This question comes up in a context where Isabelle is used with formal software development in mind more than with pure mathematics in mind (and from a standalone developer's perspective).
It seems that, at best, SML programs generated from an Isabelle theory use SML's IntInf.int, not the native integer type, which is Int.int, even when Code_Target_Int, Code_Binary_Nat or Code_Target_Nat is used. Inspecting the sources of these theories seems to confirm that this is all they can do. Native platform integers may be required for multiple reasons, including efficiency, and the case where the imperative SML program is to be optionally translated into an imperative-language subset (e.g. C or Ada), which is relevant when the theory relies on the Imperative_HOL theory. The codegen.pdf document which comes with the Isabelle distribution did not help, except in suggesting the first of the options below.
Options may be:
Not using Isabelle's int and nat, re-creating a new numeric type from scratch, and then using the code_printing command (with its type_constructor and constant clauses) to give it the native platform representation and operations (which implies including the range limitations in the theory in some way); see the sketch after this list. This must be tedious, although hopefully not error-prone, thanks to the formal environment. Note this does not seem feasible with Isabelle's own int and nat: it makes code generation fail, and nothing tells you which constants are missing from the code_printing command.
If the SML program is to be compiled directly (e.g. with MLton), tweak the SML environment with a replacement IntInf structure: this may be unsafe or not feasible, and still requires embedding the range limitations in the theory, so the previous option may in the end be better than this one.
Touch up the generated program to change IntInf into Int: easy, but is it safe? (At least IntInf implements the same signature as Int does, so maybe it is.) As above, this requires specifying bounds in the theory in some way, which is acceptable.
Dive into Isabelle internals: surely unreasonable, even worse than the second option.
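For illustration, here is a minimal sketch of the first option; machine_int, mi_plus and mi_times are hypothetical names for a user-defined type and its operations, which would have to be defined in the theory first, together with their range invariant:

(* Hypothetical user-defined type mapped onto the native SML integers: *)
code_printing
  type_constructor machine_int ⇀ (SML) "Int.int"
| constant "mi_plus :: machine_int ⇒ machine_int ⇒ machine_int" ⇀ (SML) "Int.+ ((_), (_))"
| constant "mi_times :: machine_int ⇒ machine_int ⇒ machine_int" ⇀ (SML) "Int.* ((_), (_))"

Note that proving anything about overflow behaviour then remains entirely on the theory side; the code_printing mapping itself is trusted, not verified.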
There exists a Word theory, but according to some readings, it seems not suited to this purpose.
Are there other known options not listed here? Any comments on the listed options?
If there is no ready-made solution (I feel there is none at the time), what are the best known hints or leads (e.g. links to documents, mentions of concepts)?
Update
Points #2 and #3 of the list may be OK (if they are OK at all) only if there is a single integer type. If the program uses more than one, they are not applicable.

Directly generating native words from Isabelle int would be unsound, because your formalisation would not take overflow into account where it exists in reality.
It looks like the AFP entry Native_Word does what you want, though:
http://afp.sourceforge.net/entries/Native_Word.shtml
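For instance, a minimal sketch of its use (assuming the AFP entry is installed; the import path may differ between AFP versions):

theory Scratch
  imports "$AFP/Native_Word/Uint32"
begin

(* uint32 is mapped to a native machine word in the generated code;
   overflow wraps around and is part of the formal model: *)
value "(0xFFFFFFFF :: uint32) + 1"  (* = 0 *)

end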

Related

completely replace the inner syntax in isar?

I am interested in using Isar as a meta language for writing formal proofs about J, an executable math notation and programming language, and I'd like to be able to use J as the inner syntax.
J consists of a large number of primitives, and assigns (multiple!) meanings to every ASCII character, including single and double quotes.
Where can I find documentation or example code for implementing a completely new inner syntax? Or is this even possible? (I've been looking around in the src/ directory, but it's somewhat overwhelming and I'm not entirely sure what I'm looking for.)
Answer B: Building on HOL, with an Improvised J Syntax
Clarification is good, but I don't like to do the handshaking necessary to do it.
My first answer below was largely based on your phrase, "a completely new syntax", and I think it's half of an answer to a question like this:
Suppose, hypothetically, that I need syntax that's very close to the syntax of J. What would that require, with regards to Isabelle/HOL?
My answer:
Most likely, I'd say you would have to undefine much of the syntax for the constants, functions, and type classes of Isabelle/HOL, which would require that you do extensive editing of the standard Isabelle/HOL distribution, to get it back working. And some syntax in Isabelle/HOL, you most likely wouldn't be able to take out.
Or, you would have to start fresh, with an import of Pure as a starting point. Please see my first answer below.
Just Syntax? Now we're back in normal user space
The customization of syntax in Isabelle/HOL makes us all a potential True Artiste.
There are advanced ways to tap into the power of defining syntax, such as parse_translation, with Isabelle/ML, but I don't use advanced methods. I use a few basic keywords to define the syntax: notation, no_notation, syntax, and translations, along with abbreviation, when either I want to rearrange the input arguments of a function, or I don't want to mess up the notation for a standard HOL function.
notation, no_notation, the easy ones
I don't use no_notation a lot, but you need it in your arsenal. For an example, see Can I overload the notation for operators that are assigned to bool and list?.
The use of notation is easy, once you see a few examples.
For an infix operator, plus :: 'a => 'a => 'a, here are some examples:
notation plus (infixl "[+]" 65)
notation (input) plus (infixl "[+]" 65)
notation (output) plus (infixl "[+]" 65)
With that example, I entered the realm of possibly messing up the notation for plus, which is an operator from a standard HOL type class.
The line from above that won't mess up the output display is the line that uses (input).
For notation, to find examples, do greps in THY files or on the src/HOL folder, because there are too many variations to give you lots of examples here.
abbreviation, and not messing other things up
Suppose I want a really tight binding for the standard even predicate. I could do something like this:
notation (input) even ("even _" [1000] 1000)
notation (output) even ("even _" [1000] 999)
I say "could", because I don't know how that will mess up the standard function application of even, so I wouldn't want to do that.
Why the 999? It's just from trial and error, and from experience, where I know that this next line alone messes up declare[[show_brackets]]:
notation even ("even _" [1000] 1000)
That's the way it is with defining syntax. It's a combination of trial and error, finding examples for use as templates, experience, and noticing later on that you messed something up.
I forget all the things that abbreviation helps me out with. An innovative use of abbreviation can keep you from having to use more complicated methods.
You could use it to rearrange arguments, for some notational purpose:
abbreviation list_foo :: "'a list => 'a => 'a list" where
"list_foo xs x == x # xs"
notation
list_foo ("_ +#+ _" [65, 65] 64)
That example actually demonstrates several things. I was just trying to make a quick example, and I had something like (infixl "_ +#+ _" [65, 65] 64). There's not a lot of variation in how I define notation, so I had to find an example in Set.thy to show me that I needed to take out the infixl, since I wanted to use [65, 65] 64 as a variation on how you can define syntax.
Did I get the priorities right with [65, 65] 64? I have no idea. It's just for a quick example.
syntax and translations
You have to have it in your arsenal, but it will cause you a lot of time-consuming grief. Do greps and find examples. Try this and that. When you stumble on something that works, and you think you need it, then save it somewhere. If you don't, and you make a small change that breaks what you had, and you didn't save what you had that worked, you will regret having to spend a lot of time trying to get back to what worked.
The Isar Reference Manual, isar-ref.pdf#175 has a little info. Also, you can look up the use of notation in that PDF.
The unasked for part of Answer Part B
In your comment, you say this:
I already do have a "logic of programming" that I want to implement (cs.utoronto.ca/~hehner/FMSD) and J is a language that's especially well suited for formal proofs. I'm just trying to figure out how to re-use Isabelle's logic infrastructure rather than writing my own.
A short, unsafe answer, from anybody, for a question like this, even hedged, is like:
You most likely can't do, in Isabelle/HOL, what you're wanting to do with J.
A safer, short answer is like this:
Most likely, you will have major problems trying to do what you're wanting to do with J in Isabelle/HOL.
Those are short, quick answers. How can an answer to a question like this be short, if it actually tries to address the why?
It ends up being a "given everything I know" answer, because many times it's not that it can't be done, but that the right group of people, given a long enough period of time, given the right technology, haven't yet done it.
My headings below become my points. I try to blow through the rest fairly quickly, but still document things.
Because you're using HOL as your logic, my original answer still applies, slightly modified
The development of Isabelle/HOL into what it is today, starting with Robin Milner, is what I categorize as rocket science logic.
From all of my searches, and from all of my listening, it appears that there's still a lot of rocket science logic that needs to be developed before proof assistants can be used to formally verify any ole program written in any ole imperative programming language.
You have a logic, HOL, but you're implying that you're going to implement something similar to what a whole lot of people want, and have wanted for a long time.
What's below is to support what I say here.
J as a language well suited for formal proofs
There would be the traditional form of algorithm analysis, like Introduction to Algorithms, 3rd, by Cormen & Leiserson.
I'll call program proofs in Isabelle/HOL mechanized proofs and formally verified programs. I also consider certain pencil-and-paper proofs to be formal.
In traditional, non-mechanized proofs, then, yes, I guess J is a language well suited for formal proofs, which I say because you've told me it is. But then, big, popular programming languages, in particular C++ and Java, have textbooks written about them on the subject of formal algorithm analysis. So it must be that, with traditional, non-mechanized proofs, they too can be reasoned about.
J in the context of mechanized proofs
No, it's not a language well-suited for formal, mechanized proofs. It uses (a better word than uses?) imperative programming, and it appears to be object oriented.
Largely, I'm just repeating things I've read others say. I'll start making statements as my personal conclusions. That will make things shorter.
Functional programming languages are good for formal proofs. Traditional programming involves mutating variables, and supposedly that bumps way up the difficulty of proofs.
I was searching for a statement about object oriented languages on the mailing list, but if you listen, people say they've done this or that special thing, but it's never something like, "Here's a complete development and formalization that easily allows you to verify programs written in general-purpose programming language X".
Formal proof, among other things, is about a set of axioms being enforced, where the selection of the axioms is the result of rocket science logic over a number of years, because the norm is not for a seemingly desirable set of axioms to be logically consistent.
For formal verification, you don't get to bypass the enforcement of the axioms. In textbooks, number constants just show up and get used, and they reason about them.
In formal proof, number constants, in particular the real numbers, are difficult to use. Ask yourself, "What is a natural number, an integer, a rational number, and a real number constant in Isabelle/HOL?" Now, if you answered that question, then ask yourself, "How do I do proofs involving natural numbers, integers, rational numbers, and real numbers in Isabelle/HOL?"
Now, contrast those questions with the fact that number constants just show up in most textbooks, and get used. That's not the way it works in formal proof. There's no magical appearance of number systems and constants. There can be a little magic in the automation of proofs involving numbers, but I'm pretty sure I'm doomed if my plan ever becomes dependent on magic like that.
L4.verified (and AutoCorres)
There's the L4.verified project by NICTA. (Update: And at sel4.systems, with co-credit given to General Dynamics C4 Systems. A big-name company like GD being involved supports my thesis that formal verification of imperative programming languages is something that's been highly desired for a long time.)
A quote:
We chose an operating system kernel to demonstrate this: seL4. It is a small, 3rd generation high-performance microkernel with about 8,700 lines of C code.
Why so selective? Why not any ole C program? I guess verifying C is hard. NICTA, they're not a small, inexperienced, unfunded group.
(Update: There's also the related AutoCorres project at NICTA, with its Quickstart Guide PDF. The release version is at v1.0, which was released on 2014-12-16. That must mean that they achieved the primary goal of whatever it was they were supposed to achieve. When I read their overview on the AutoCorres web page, I take it as supporting what I'm saying. It appears to me that they engage in some rocket science logic to get the C into another form, at least a little rocket science logic. I'm no authority on what constitutes rocket science logic. I think I'm safe in saying for sure that they're using PhD level logic to get their results.)
The book Practical Theory of Programming: where did number constants come from?
I downloaded the PDF for the book A Practical Theory of Programming.
One of the first things I started looking for in that book is "what are numbers and how are they formalized".
Number systems, we take them for granted, but they represent all that which is difficult about formal proof.
In a book, when number constants just show up, and just start getting used, it most likely means that there's no real formalization of the corresponding number systems. Why? Building up number system constants is extraordinarily involved.
If number constants weren't formally built up, there's no real formal proof there. If they do get built up formally, life is still not easy.
Here's something about the difficulty of working with real numbers: Larry Paulson's talk at NASA in 2014.
The book Practical Theory of Programming: while loops
The other thing I immediately started looking for was an example of a traditional loop, where you repeatedly modify a variable.
It starts at Section 5.2.0 While Loop, aPToP.pdf#76. The example is on the following page, Exercise 265:
while ¬ x = y = 0 do
  if y > 0 then y := y - 1
  else (x := x - 1. var· y := n)
There you go, a classic example of using mutable state (where I did searches on "mutable state" to actually see if I used the phrase correctly, with no clear conclusion).
You have a variable, and you're changing its contents. That, so I hear, or so I conclude, represents why you're doomed when it comes to wanting to verify programs you write in J.
It's not that I want you to be doomed. When you put up on GitHub "The Formalization of the J Programming Language in Isabelle/HOL - with Many Demonstrations Showing the Ease with which J Programs Can Be Formally Verified", I'll be there.
Coq. What's out there for imperative programming?
I have this hunch that Coq would be better, if my main application was programming.
I keep the requirements minimal, by doing a Google search on coq imperative.
The first link is Ynot.
Does this support your idea that you should be able to take J and implement it in Isabelle/HOL?
Not to me. It supports my idea that if someone, who knows a lot, and gets to make a design decision about the language they're going to use, then they can do formal verification of imperative programs in a proof assistant.
You, on the other hand, first pick the programming language, and then are now going to mold a proof assistant around it.
My interest about J, on a scale from 0 to 10
At this point, my interest in J is basically 0, on a scale from 0 to 10.
Suppose, though, you put up a web site, "How It's Going with That J Thing", and I subscribe to it with an RSS reader.
It's not that I don't want you to formally verify J programs in Isabelle/HOL, it's that I don't think you'll be able to do it, and so there's no reason for me to care about it, since I don't need it.
However, if I saw new activity in my RSS reader for your site, and it told me you succeeded, and you put your code up on GitHub, then my interest goes to 10. Someone doing formalization for a full-blown programming language in Isabelle/HOL, where proofs can be decently implemented, like for functional programming, and not just for a small subset of the language, that's something to be interested in.
Original Answer
Four days have passed, it's the holiday period, and the experts might not show up, so I give you my answer.
I try to get to the short answer as quick as possible, but I say a few things first (actually, a lot of things), to try and give my quick answer some support.
I don't think you're using the Isabelle vocabulary quite right ("inner syntax"), but I take two phrases of yours, with my bold emphasis added:
I am interested in using Isar as a meta language for writing formal proofs about J...
Where can I find documentation or example code for implementing a completely new inner syntax?
I'm not one to want to spend time clarifying, so here's what I take as your requirements, where I add a few details, from having listened to the experts, and figuring out a few things for myself, based on what they've said:
You want a logic which can be used to reason about programs you've written in J, where you use the minimal logic of Isabelle/Pure as your starting point (because you need the complete syntax of J, and want to start fresh).
You want to define syntax, using Isabelle/Isar, which implements (or models?) the complete syntax and functionality of J. (You didn't say that you only wanted to reason about a subset of the syntax and functionality of J.)
Unfortunately, my short answer is not completely set up.
To try to get you to realize what you're asking for, I now quote from the main J web page, where the emphasis is mine:
J is a modern, high-level, general-purpose, high-performance programming language.
I now rephrase general-purpose as full-blown, like C, like Pascal, like many high-level, general-purpose programming languages, and I remind you that you want two things:
A logic in Isabelle, which surely has to be comparable in sophistication, in features, and in power to the logic of Isabelle/HOL.
The syntax and use (or modeling?) of a full-blown programming language, J, in Isabelle, starting with Isabelle/Pure, where your implementation surely has to be
a little comparable in sophistication and power to the code generator of Isabelle/HOL, which can export code for 5 programming languages, SML, OCaml, Haskell, Scala, and Eval (Isabelle/ML),
and comparable in power to the logic engine of Isabelle/HOL, which implements (or models?) high-level, functional programming constructs such as definition, primrec, datatype, and fun, which let a person define functions and new datatypes, along with the standard library of Isabelle/HOL types, such as pairs, lists, etc.
Now, what I claim, as my personal conclusion, is that what you want to implement is at least as difficult to implement as Isabelle/HOL, which is the result of a large number of people, done over many years.
Please consider what Peter Lammich had to say on the Isabelle user's list in I need a fixed mutable array:
HOL itself does not support mutable arrays.
However, there is Imperative_HOL, which has a heap monad supporting
mutable arrays.
Then there is afp/Collections/Lib/Diff_Array, which provides an
implementation of arrays that behaves purely functional, but is
efficient if only the last version is accessed.
However, if you are not after efficient executability, but only
looking for an abstract model of a memory, it makes no sense using the
above types, as the efficiency comes at the price of additional
formalization overhead.
My point from the quote is that Isabelle/HOL, though powerful enough to be one of the leading competitors as a proof assistant, doesn't implement standard arrays in the main part of its logic, which you get when you import Complex_Main.
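For reference, here is a small sketch of what the heap monad of Imperative_HOL looks like; the operations Array.new, Array.upd and Array.nth are from HOL/Imperative_HOL, and the definition name is mine:

(* Create an array of ten zeros, update one cell, read it back: *)
definition array_demo :: "nat Heap" where
  "array_demo = do {
     a ← Array.new 10 (0 :: nat);
     Array.upd 3 42 a;
     Array.nth a 3
   }"

Even this toy example lives inside a monad, outside the plain logic of HOL terms, which is the formalization overhead the quote refers to.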
Let (L, P) be a pair, where L is the logic and P is the programming language. I want to talk about two pairs, (Isabelle/HOL, Haskell), and what you want, (x, J), where x is your yet-to-be-determined logic.
There is a very close relationship between Isabelle/HOL and Haskell. For example, the type classes of Isabelle/HOL are advertised as Haskell-like type classes, and also, that Haskell is a pure functional programming language, and Isabelle/HOL is pure. I don't want to go further, because as a non-expert, I'm sure to say something that's not right.
The point I want to make is this:
Haskell is a full-blown programming language,
Isabelle/HOL is a powerful logic,
Haskell is one of the programming languages that can be exported from Isabelle/HOL,
but yet Isabelle/HOL doesn't implement (or model?) much of Haskell.
I don't want to talk as some authority. But from listening, my conclusion is: it's that logic thing. Apparently, it's much easier to implement programming languages than to develop logic to reason about programs.
The short answer is that, in my opinion, the example code that you're looking for is Isabelle/HOL, because though there are some examples in Isabelle2014/src of other logics, what I've quoted you as saying and wanting, and what I'm saying you're saying and wanting, is that you want and need a full-blown logic, like Isabelle/HOL.
From here, I try to throw out a few quick ideas.
I like that car, but what I really want is liquid nitrogen for fuel
That's my joke.
You're talking to a senior engineer, who has worked in the industry for years, and has learned the expert knowledge that has accumulated in the automotive industry, over years and years, and you say, "I like that idea of a car, but my idea is to use a nitrogen fuel cell instead of gasoline. How would I do that?"
More logics in the Isabelle2014/src folder
The links under Theory libraries for Isabelle2014, on the distribution web page, match up with folders in the Isabelle2014/src folder.
In the src folder, you will see the folders CCL, Cube, CTT, and others.
I'm sure those are good for learning, though probably still difficult to understand, but those aren't what you've described. You're asking for a full-blown implementation of something that models a programming language.
If the use of C/C++ is so big, then why isn't there something like you want for C/C++?
I guess there is, at least, sort of, for C. I found vcc.codeplex.com/. Again, I'm not an expert, so I don't want to be saying exactly what is out there, and what isn't.
My point here is that C and C++ have been around for a long time, and heavily used, and the link above shows that there are professionals who have, for a long time, been interested in verifying C programs, which makes a lot of sense.
But, after all these years, why isn't program verification an integral part of C/C++ programming?
From having listened to those here and there, and on the mailing list, and from listening to people like Martin Odersky, the Scala architect, they forever want to talk about mutable and immutable state, where traditional programming, like C, and I assume J, would be in the category of using mutable state, very much using it. Over time, I have heard a number of times that mutable state makes it difficult to reason about what a program does.
My point again is that it must be a lot easier to design programming languages, than to reason about programs.
Finally, a little source
If there had been some competition for this question, I might have been less verbose, though maybe not, though probably so, as in not even giving an answer.
My final point is a re-emphasis of points above. It pays to know a little history, and I start way before Church and Curry.
I know that Isabelle/HOL is the result of what started at Cambridge, with Robin Milner, the author of ML, then Mike Gordon of the HOL group, then Larry Paulson, the author of using Pure as minimal logic to define other logics, and then Tobias Nipkow teamed up with him to get HOL started as a logic in Isabelle, and then Makarius Wenzel put a higher-level syntax on it all, Isar (it's more than just syntactic sugar; it's fundamental to the feature of structured proofs), along with the PIDE frontend, and all along other people throughout the world have made numerous contributions, many from the big group at TUM, in Germany, but then there's CERN of Australia (update: CERN? that was no joke; I actually do know the difference between CERN and NICTA; the world, it's not an easy thing to talk about), and back to the European area, a certain Swiss establishment, ETH, and still more places spread around Germany and Austria, UIBK, and back over to England? Who did I leave out? Me, of course, and lots of others around the world.
The rambling point? It's that thing of you asking for something that embodies the expertise of an industry. It's not bad to ask for it. It's downright audacious, and I could be completely wrong in what I'm saying, and missed that folder in src, the HOWTO of Implementing Logic for General-Purpose Programming Languages, All in Ten Mostly Easy Steps, Send in Your $9.95 Now, or Euros if That's All You Got, You Do the Conversion, I Trust You, But Wait, There's More, Do a Change Directory to Isabelle2014/medicaldoctor and Learn How to Become a Brain Surgeon, Too.
That's another joke, I claim. Just a space filler, nothing much more.
Anyway, consider here lines 47 to 60 of HOL.thy:
setup {* Axclass.class_axiomatization (@{binding type}, []) *}
default_sort type
setup {* Object_Logic.add_base_sort @{sort type} *}
axiomatization where fun_arity: "OFCLASS('a ⇒ 'b, type_class)"
instance "fun" :: (type, type) type by (rule fun_arity)
axiomatization where itself_arity: "OFCLASS('a itself, type_class)"
instance itself :: (type) type by (rule itself_arity)
typedecl bool
judgment
Trueprop :: "bool => prop" ("(_)" 5)
Periodically, I've put in effort at understanding those few lines. For a long time, my starting point was typedecl bool, and I wasn't concerned with trying to understand what was before that, other than that HOL.thy imports Pure.
Recently, in trying to figure out types and sorts in Isabelle, from having listened to the experts, I finally saw that this line is where we get something like x::'a::type:
setup {* Object_Logic.add_base_sort @{sort type} *}
Another point? I'm back to what I said earlier. Because you want full-blown, your example is Isabelle/HOL, but yet just the first 57 lines of HOL.thy aren't easy to understand. But if you don't start with HOL, where are you going to look? Well, if what you find ends up being easy, there's a good chance it's partly because hundreds of people, over many years, didn't put their effort into the best way to start things out.
Or, it could have just been the 3 people listed as authors, Nipkow, Wenzel, and Paulson. In any case, there's still years of experience and education behind what's in there, even though HOL.thy is not that long, only 2019 lines. Of course, to understand what's in HOL.thy, you have to at least have a vague understanding of what Pure is.
Take a look at the src/Cube folder. It's one of the example logics that I mentioned above.
There are only two files, Cube.thy and Example.thy. It should be easy enough, but then that's the problem, it's too easy. It's not going to reflect the sophistication of Isabelle/HOL.
Your problems aren't my problem. Isabelle/HOL is good for reasoning about mathematics, like its ability to abstract operators with type classes. And it's good for more, like defining functions using functional programming, to be exported for SML, OCaml, Haskell, Scala, and Eval.
I'm just a beginner, that's all I am. If there's a better answer, then I hope it gets put forth by someone.
A few notes on the original question:
Outer syntax is the theory and proof language of Isar; to change it you define additional commands. You are subject to general types of theory content, like theory, local_theory, Proof.context, but these types are very flexible and can assimilate arbitrary ML data that is specific to your application.
Inner syntax is the type/term language of the logic, i.e. Pure for the framework and HOL for applications (or any other logic that you prefer, although HOL is so advanced today, that you should not ignore it without really good reasons). Ultimately you spell-out simple-typed lambda terms.
Both for outer and inner syntax you are subject to certain notions of tokens (identifiers, quoted strings etc.). Your language needs to conform to that, if it is meant to co-exist directly with the existing syntax framework.
It is nonetheless possible to embed totally different languages into outer and inner syntax of Isabelle, by using quotations. E.g. see the document preparation language that is based on LaTeX and is delimited by funny {* ... *} markers for verbatim text. More basic quotations use " ... " similar to ML string syntax. Inside the inner syntax, '' ... '' (double single quotes) do a similar job.
In Isabelle2014 there is a new syntactic device of text cartouches that makes this work a bit more smoothly. E.g. see the examples in Isabelle2014/src/HOL/ex/Cartouche_Examples.thy which explore a bit some possibilities.
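For example, the same embedded ML snippet in the old and the new form (a tiny sketch in the spirit of Cartouche_Examples.thy):

ML {* writeln "embedded ML, old-style verbatim markers" *}
ML ‹writeln "embedded ML, delimited by cartouches"›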
Another current example from Isabelle2014 is the rail language inside Isabelle document source: it may serve as almost stand-alone example of a "domain-specific formal language" defined from scratch. E.g. see Isabelle2014/src/Doc/Isar_Ref/Document_Preparation.thy and look at the various uses of @{rail ...} -- the implementation of that is in Isabelle2014/src/Pure/Tools/rail.ML -- a file of finite size to be studied carefully to learn more.

What are the main differences between CLISP, ECL, and SBCL?

I want to do some simulations with ACT-R and I will need a Common Lisp implementation. I have three Common Lisp implementations available: (1) CLISP, (2) ECL, and (3) SBCL. As you might have gathered from the links, I have read a bit about all three of them on Wikipedia. But I would like the opinion of some experienced users. More specifically I would like to know:
(i) What are the main differences between the three implementations (e.g.: What are they best at? Is any of them used only for specific purposes and might therefore not be suited for specific tasks?)?
(ii) Is there an obvious choice either based on the fact that I will be using ACT-R or based on general reasons?
As this could be interpreted as a subjective question
I checked What topics can I ask about here and What types of questions should I avoid asking? and if I read correctly it should not qualify as forbidden fruit.
I wrote a moderately-sized application and ran it in SBCL, CCL, ECL, CLISP, ABCL, and LispWorks. For my application, SBCL is far and away the fastest, and it's got a pretty good debugger. It's a bit strict about some warnings--you may end up coding in a slightly more regimented way, or turn off one or more warnings.
I agree with Sylwester: If possible, write to the standard, and then you can run your code in any implementation. You'll figure out through testing which is best for your project.
Since SBCL compiles so aggressively, once in a while the stacktrace in the debugger is less informative than I'd like. This can probably be controlled with parameters, but I just rerun the same code in one of the other implementations. ABCL has an informative stacktrace, for example, as I recall. (It's also very slow, but if you want real Common Lisp and Java interoperability, it's the only option.)
One of the nice things about Common Lisp is how many high-quality implementations there are, most of them free.
For informal use--e.g. to learn Common Lisp, CCL or CLISP may be a better choice than SBCL.
I have never tried compiling to C using ECL. It's possible that it would beat SBCL on speed for some applications. I have no idea.
CLISP and LispWorks will not handle arbitrarily long argument lists (unless that's been fixed in the last couple of years, but I doubt it). This turned out to be a problem with my application, but would not be a problem for most code.
Doesn't ACT-R come out of Carnegie Mellon? What do its authors use? My guess would be CMUCL or SBCL, which is derived from CMUCL. (I only tried CMUCL briefly. Its interpreter is very slow, but I assume that compiled code is very fast. I think that most people choose SBCL over CMUCL, however.)
(It's possible that this question belongs on Programmers.SE.)
In general, SBCL is the default choice among open-source Lisps. It is solid, well-supported, produces fast code, and provides many goodies beyond what the standard mandates (concurrency primitives, profiling, etc.) Another implementation with similar properties is CCL.
CLISP is more suitable if you're not an engineer, or you want to quickly show Lisp to someone non-engineer. It's a pretty basic implementation, but quick to get running and user-friendly. A Lisp-calculator :)
ECL's major selling point is that it's embeddable, i.e. it is rather easy to make it work inside some C application, like a web server etc. It's a good choice for geeks who want to explore solutions on the boundary of Lisp and the outside world. If you're not interested in such a use case, I wouldn't recommend trying it, especially since it is not actively supported at the moment.
Their names, their bugs, and their non-standard additions (using them will lock you in).
I use CLISP as a REPL and for testing during development, and usually SBCL for production. ECL I've never used.
I recommend you test your code with more than one implementation.

F# and tuple output

Over at http://diditwith.net, I see that, in F#, it isn't strictly necessary to pass out parameters to a function that otherwise requires them. The language will auto-magically stuff the result and the output parameter into a tuple. (!)
Is this some kind of side effect (pardon the pun) of the general mechanics of the language, or a feature that was specifically articulated in the F# specification and deliberately programmed into the language?
It's an awesome feature, and if it was expressly put into F#, then I'm wondering what other nuggets of gold like this are lurking within the language, because I've pored over dozens of web pages and read through three books (by D. Syme, T. Petricek, and C. Smith) and I hadn't seen this particular trick mentioned at all.
EDIT: As Mr. Petricek has responded, below, he does mention the feature in at least two places in his book, Real-World Functional Programming. My bad.
This is not a side-effect of some other, more general, mechanism in the F# language.
It has been added specifically for this purpose. .NET libraries often return multiple values by adding out (or ref) parameters at the end of the method signature. In F#, returning multiple values is done by returning a tuple, so it makes sense to turn the .NET style into the typical F# pattern.
I don't think F# does many similar tricks, especially when it comes to interoperability, but you can browse through some of the handy snippets here and here.
(I quickly checked and Real-World Functional Programming mentions the trick briefly on pages 88 and 111.)
This is a specific feature to make interop with .NET methods more pleasant - all trailing out parameters can instead be treated as part of the return value (but note that this only affects trailing out parameters, so a method with the C# signature like void f(out int i, int j) can't be called this way).
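For instance (a small sketch; Int32.TryParse is the classic case of a method with a trailing out parameter):

// Int32.TryParse has the .NET signature: bool TryParse(string s, out int result).
// F# lets you drop the trailing out parameter and receive a tuple instead:
let ok, n = System.Int32.TryParse "42"      // ok = true, n = 42

// The explicit style, passing a byref, still works:
let mutable m = 0
let ok2 = System.Int32.TryParse("43", &m)   // ok2 = true, m = 43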
Arguably, out parameters are just a way to work around the lack of tuples in .NET 1.0, anyway. It seems likely that many methods that use them would be written differently if they targeted later versions of the framework (by using Nullable<_> types or tuples as return types).

Can someone tell me what Strong typing and weak typing means and which one is better?

That'll be the theory answers taken care of, but the practice side seems to have been neglected...
Strong typing means that you can't use one type of variable where another is expected (or are restricted from doing so). Weak typing means you can mix different types. In PHP for example, you can mix numbers and strings and PHP won't complain, because it is a weakly typed language.
$message = "You are visitor number ".$count;
If it was strongly typed, you'd have to convert $count from an integer to a string, usually either by casting:
$message = "you are visitor number ".(string)$count;
...or a function:
$message = "you are visitor number ".strval($count);
As for which is better, that's subjective. Advocates of strong-typing will tell you that it will help you to avoid some bugs and/or errors and help communicate the purpose of a variable etc. They'll also tell you that advocates of weak-typing will call strong-typing "unnecessary language fluff that is rendered pointless by common sense", or something similar. As a card-carrying member of the weak-typing group, I'd have to say that they've got my number... but I have theirs too, and I can put it in a string :)
"Strong typing" and its opposite "weak typing" are rather weak in meaning, partly since the notion of what is considered to be "strong" can vary depending on whom you ask. E.g. C has been been called both "strongly typed" and "weakly typed" by different authors, it really depends on what you compare it to.
Generally a type system should be considered stronger if it can express the same constraints as another and more. Quite often two type systems are not comparable, though -- one might have features the other lacks and vice versa. Any discussion of relative strengths is then up to personal taste.
Having a stronger type system means that either the compiler or the runtime will report more errors, which is usually a good thing, although it might come at the cost of having to provide more type information manually, which might be considered effort not worthwhile. I would claim "strong typing" is generally better, but you have to look at the cost.
It's also important to realize that "strongly typed" is often incorrectly used instead of "statically typed" or even "manifest typed". "Statically typed" means that there are type checks at compile-time, "manifest typed" means that the types are declared explicitly. Manifest-typing is probably the best known way of making a type system stronger (think Java), but you can add strength by other means such as type-inference.
I would like to reiterate that weak typing is not the same as dynamic typing.
This is a rather well written article on the subject and I would definitely recommend giving it a read if you are unsure about the differences between strong, weak, static and dynamic type systems. It details the differences much better than can be expected in a short answer, and has some very enlightening examples.
http://en.wikipedia.org/wiki/Type_system
Strong typing is the most common type model in modern programming languages. Those languages have one simple feature - knowing the types of values at run time. We can say that strongly typed languages prevent mixing operations between two or more different types. Here is an example in Java:
String foo = "Hello, world!";
Object obj = foo;
String bar = (String) obj;
Date baz = (Date) obj; // This line will throw an error
The previous example will work perfectly well until the program hits the last line of code, where a ClassCastException is thrown, because Java is a strongly typed programming language.
When we talk about weakly typed languages, Perl is one of them. The following example shows how Perl doesn't have any problems with mixing two different types.
$a = 10;
$b = "a";
$c = $a . $b;
print $c; # returns 10a
I hope you find this useful. Thanks.
This article is a great read: http://blogs.perl.org/users/ovid/2010/08/what-to-know-before-debating-type-systems.html It cleared up a lot of things for me when I was researching a similar question; I hope others find it useful too.
Strong and Weak Typing:
Probably the most common way type systems are classified is "strong"
or "weak." This is unfortunate, since these words have nearly no
meaning at all. It is, to a limited extent, possible to compare two
languages with very similar type systems, and designate one as having
the stronger of those two systems. Beyond that, the words mean nothing
at all.
Static and Dynamic Types
This is very nearly the only common classification of type systems
that has real meaning. As a matter of fact, its significance is
frequently under-estimated [...] Dynamic and static type systems are
two completely different things, whose goals happen to partially
overlap.
A static type system is a mechanism by which a compiler examines
source code and assigns labels (called "types") to pieces of the
syntax, and then uses them to infer something about the program's
behavior. A dynamic type system is a mechanism by which a compiler
generates code to keep track of the sort of data (coincidentally, also
called its "type") used by the program. The use of the same word
"type" in each of these two systems is, of course, not really entirely
coincidental; yet it is best understood as having a sort of weak
historical significance. Great confusion results from trying to find a
world view in which "type" really means the same thing in both
systems. It doesn't.
Explicit/Implicit Types:
When these terms are used, they refer to the extent to which a
compiler will reason about the static types of parts of a program. All
programming languages have some form of reasoning about types. Some
have more than others. ML and Haskell have implicit types, in that no
(or very few, depending on the language and extensions in use) type
declarations are needed. Java and Ada have very explicit types, and
one is constantly declaring the types of things. All of the above have
(relatively, compared to C and C++, for example) strong static type
systems.
Strong/weak typing in a language is related to how easily you can do type conversions:
For example in Python:
s = 5 + 'a'  # raises a TypeError: Python will not cast one type to the other implicitly
Whereas in the C language:
int a = 5;
a = 5 + 'c';
/* is fine, because C treats 'c' as an integer in this case */
Thus Python is more strongly typed than C (from this perspective).
Maybe this can help you to understand strong and weak typing.
Strong typing: It checks the type of variables as soon as possible, usually at compile time. It prevents mixing operations between mismatched types.
A strongly typed programming language is one in which:
All variables (or data types) are known at compile time
There is strict enforcement of typing rules (a String can't be used where an Integer would be expected)
All exceptions to typing rules result in a compile-time error
Weak typing: Weak typing delays checking the types of the system as late as possible, usually to run time. With it you can mix types without an explicit conversion.
A "weak-typed" programming language is simply one which is not strong-typed.
Which is preferred depends on what you want. For scripts and the like you will usually want weak typing, because you want to write as little code as possible. In big programs, strong typing can reduce errors at compile time.
Weak typing means that you don't specify what type a variable is, and strong typing means you give a strict type to each variable.
Each has its advantages, with weak typing (or dynamic typing, as it is often called), being more flexible and requiring less code from the programmer. Strong typing, on the other hand, requires more work from the developer, but in return it can alert you of many mistakes when compiling your code, before you run it. Dynamic typing may delay the discovery of these simple problems until the code is executed.
Depending on the task at hand, weak typing may be better than strong typing, or vice versa, but it is mostly a matter of taste. Weak typing is commonly used in scripting languages, while strong typing is used in most compiled languages.

Interactive math proof system

I'm looking for a tool (GUI preferred but CLI would work) that allows me to input math expressions and then perform manipulations of them but restricts me to only mathematically valid operations. Also, the tool must be able to save a session and later prove that the given set of saved operations is valid.
Note: I am not looking for a system to generate proofs, only one that checks that the steps I manually specify are valid.
I have used ACL2 for similar operations and it does well for some cases but it is very hard to use for everything else.
This little project is my motivation. It is a D template type that allows for equation solving. Given this equation:
(A * B) = C + D / F;
Any one of the symbols can be set as unknown, and evaluating the expression will result in an assignment to that variable. It works by building expression trees in the type and then using rewrite rules to convert them into something that can be evaluated for the unknown variable.
What I need is some way to validate the rewrite rules. They can be validated by testing the assertion that, given that some relation holds, another one also holds.
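For instance, in a proof assistant such as Isabelle (mentioned in the answers below), one such rewrite rule can be stated and checked as a lemma; a minimal sketch, for isolating A in A * B = C over the reals:

lemma solve_for_A:
  fixes A B C :: real
  assumes "B ≠ 0" and "A * B = C"
  shows "A = C / B"
  using assms by (simp add: field_simps)

A saved session of such lemmas is exactly a checkable record that each manual rewriting step is valid.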
Several American proof assistants were mentioned already (usually with LISP syntax), so here is a Europe-centric list to complement that:
Coq
Isabelle
HOL4
HOL-Light
Mizar
All of them are notorious for their TTY interfaces, but Coq and Isabelle provide good support for the Proof General / Emacs interface. Moreover, Coq comes with CoqIDE, which is based on OCaml/GTK and its on-board text widget. Recent Isabelle includes the Isabelle/jEdit Prover IDE, which is based on jEdit and augmented by semantic markup provided by the prover in real time as the user types.
ACL2 is notorious -- we used to say it was an expert system, and so could only be used by experts, who had to learn from Warren Hunt, J Moore, or Bob Boyer. The thing you need to do in ACL2 is really really understand how the proof system itself works; then you can "hint" it in directions that reduce the search space.
There are several other systems that can help with this kind of thing, though, depending on what you're trying to do.
If you want to work with continuous math or number theory, the ideal is Mathematica. Problem is you can buy a used car for the same amount of money (unless you can qualify for an academic license, a far better deal.)
Something similar, and free, is Maxima, the open-source descendant of Macsyma. That page also points to several others, like Axiom, that I've got no experience with.
For mathematical logic operations, there's PVS from SRI. They've got some other cool stuff like model-checking in the same framework.
There's ongoing research in this area, it's called "Theorem proving in computer algebra".
People are trying to merge the ease of use and power of computer algebra systems like Mathematica, Maple, ... with the logical rigor of proof systems. The problems are:
Computer algebra systems are not rigorous. They tend to forget side conditions such as that a divisor must not be 0.
The proof systems are hard and tedious to use (as you have discovered).
In addition to Charlie Martin's links, you may also want to check out Maple. My experience with such software is about 5 years old, but I recall at the time finding Maple to be much more intuitive than Mathematica.
The Lean prover is interactive through a JS GUI.
An old and unmaintained system is 'Ontic':
http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/areas/kr/systems/ontic/0.html
