Prerequisite knowledge for understanding The Definition of Standard ML

Could anyone tell me what background theory is needed to understand The Definition of Standard ML? I found it very interesting and beautiful. I have learned a little SML, but I want more and don't know where to start in order to understand the Definition.
Thanks in advance.

For the old version of the Definition (SML'90) there actually was a separate book called "Commentary on Standard ML", which explained how to interpret the Definition. Both the SML'90 Definition and the Commentary are long out of print, but fortunately, are available as free PDFs.
The SML'90 Definition differed from SML'97 in some respects, in particular regarding the module system. Overall, it was more complicated. But much of the Commentary should still apply, and if you have both versions side by side, it shouldn't be hard to figure out what's still relevant.

This is off the top of my head: in order to understand the methods employed in The Definition of Standard ML, you should have some basic understanding of:
set theory
functions
first-order logic
type theory
Additionally, you should be able to read and understand inference rules of which the book makes extensive use.
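For example, a typing rule for function application is usually written like this, and read "if both premises above the line hold, then the conclusion below the line holds" (a generic illustration, not a rule copied from the Definition):

    Gamma |- e1 : tau -> tau'     Gamma |- e2 : tau
    -----------------------------------------------
    Gamma |- e1 e2 : tau'

The Definition uses rules of this shape both for the static semantics (typing) and for the dynamic semantics (evaluation).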

Related

Do monads do anything other than increase readability and productivity?

I have been looking at monads a lot over the past few months (functors and applicative functors as well), trying to figure out when monads are useful in a general sense. If I am looking at a piece of code, I ask: should I employ a specific monad, or a stack via transformers? In my efforts I think I have found an answer, but I want others' input in case I have missed something. It appears to me that monads are useful for abstracting away specific plumbing to increase the readability and declarative nature of a piece of code, which can have the side effect of increasing productivity by requiring less code. The only exception I can find is the IO monad, which attempts to keep a pure function pure in the face of IO. It doesn't appear that a given monad provides a solution to a problem that can't be achieved via other means. Am I missing something?
Does any feature beyond mere Turing-completeness provide a solution to a problem that can't be achieved via other means? Nope. All Turing-equivalent languages are different ways of expressing the same basic things. Since monads are built out of more fundamental building blocks, obviously that set of building blocks is able to do anything a monad can. When we talk about a language or feature "allowing" us to do something, we mean it allows us to express that thing naturally, in a way that's easy to understand.
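To make that concrete, here is the option ("maybe") monad written from scratch in OCaml, using nothing but ordinary functions and pattern matching; the names and the safe_div example are mine, not from any particular library:

    (* The option monad, built from ordinary language features.
       return wraps a value; bind sequences computations that may fail. *)
    let return x = Some x

    let bind m f =
      match m with
      | None -> None      (* an earlier step failed: short-circuit *)
      | Some x -> f x     (* otherwise pass the value to the next step *)

    (* Without the monad, the failure plumbing is written out by hand. *)
    let safe_div x y = if y = 0 then None else Some (x / y)

    let compute_explicit a b c =
      match safe_div a b with
      | None -> None
      | Some r -> safe_div r c

    (* With bind, the same computation reads as a pipeline. *)
    let compute_monadic a b c =
      bind (safe_div a b) (fun r -> safe_div r c)

Both functions do exactly the same thing; the monadic version just hides the case analysis, which is precisely the readability point made above.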

Static code analysis tool for Common Lisp?

I'm busy learning Common Lisp, and I'm looking for a static code analysis tool that will help me develop better style and avoid falling into common traps.
I've found Lisp Critic and I think it looks good, but I was hoping that someone may be able to recommend some other tools, and / or share their experiences with them.
Given the dynamic nature of Lisp, static analysis is anything from tough to impossible, depending on the kind of source code.
For some purposes I would recommend using the SBCL compiler. Check out its manual for the features it provides. One feature is a form of type inference. It also provides a lot of standard warnings for things like undeclared variables, type problems, calling functions with the wrong number of arguments, using undefined functions, violating the ANSI CL standard in various ways, and more.
The best way to learn about good style is to read a lot of code and ask for others to review your code. This isn't something that's specific to Common Lisp.
I think one great tool is lisp-critic; you can get some information here:
http://quickdocs.org/lisp-critic/
and there is a remake that was done by Xach:
http://xach.com/lisp/

Short implementation examples of abstract interpretation

I am taking a course on abstract interpretation, but I haven't seen any examples of how the theory maps down to actual code.
I am looking for short code examples, where I preferably won't have to work with a whole compiler. The analysis doesn't have to be useful, I would just like to see an example where the analysis is derived and then implemented.
Does anyone know of any such examples, perhaps from a university course?
Abstract interpretation is based on a mathematical theory called Galois connections. The idea is simple:
Abstract the behaviour of the program.
Perform the analysis at the abstract level.
Use a Galois connection to relate the concrete program and the abstract program.
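To make those steps concrete, here is a tiny sign analysis in OCaml (a toy example of my own, not taken from any of the references mentioned in this thread): integers are abstracted to their sign, and arithmetic expressions are evaluated over that abstract domain instead of over concrete values.

    (* A toy abstract interpreter: the abstract domain records only the
       sign of an integer expression (Neg, Zero, Pos, or Top = unknown). *)
    type sign = Neg | Zero | Pos | Top

    type expr =
      | Const of int
      | Add of expr * expr
      | Mul of expr * expr

    (* Abstraction: map a concrete value to its abstract description. *)
    let alpha n = if n < 0 then Neg else if n = 0 then Zero else Pos

    (* Abstract counterparts of the concrete operations. *)
    let add_sign a b =
      match a, b with
      | Zero, s | s, Zero -> s
      | Pos, Pos -> Pos
      | Neg, Neg -> Neg
      | _ -> Top                    (* e.g. Pos + Neg: sign unknown *)

    let mul_sign a b =
      match a, b with
      | Zero, _ | _, Zero -> Zero
      | Top, _ | _, Top -> Top
      | Pos, Pos | Neg, Neg -> Pos
      | _ -> Neg

    (* The analysis: evaluate the expression in the abstract domain. *)
    let rec analyse = function
      | Const n -> alpha n
      | Add (e1, e2) -> add_sign (analyse e1) (analyse e2)
      | Mul (e1, e2) -> mul_sign (analyse e1) (analyse e2)

    (* analyse (Mul (Const (-2), Add (Const 1, Const 3)))  evaluates to  Neg *)

A full development would also define the concretization function and check the Galois connection between the two levels, but even this skeleton shows how the theory maps onto very ordinary code.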
This is the best tutorial I have seen so far about Abstract Interpretation:
There is this paper by Bertot,
Structural abstract interpretation: A formal study using Coq,
which gives a full implementation of an abstract interpreter for a simple toy language using the Coq proof assistant. I used it as a concrete reference and found it useful, although a little hard going, which is to be expected given the subject matter. Coq is a great little piece of software.
I also came across, in a paper by Cousot,
A gentle introduction to formal verification of computer systems by abstract interpretation,
rough details (but I am sure there are useful citations for full details) of an implementation in Astrée. I am not familiar with Astrée, so I didn't actually read that section, but I think it meets your criteria.
If you come across any more, please let me know! I would especially like to see a Prolog abstract interpreter.
Maybe this tool is also interesting for you:
Interproc Analyzer
It is an abstract analyzer for a very simple language, which nevertheless offers interprocedural analyses. You can try out the analysis and get numerical invariants about the analyzed program. The source code is available (OCaml).
A really thorough and precise course, given by one of the "creators" of Abstract Interpretation, Patrick Cousot (already mentioned in one of the answers):
MIT course about Abstract Interpretation. The course also offers assignments, in OCaml.
There is MonoREIL, which comes with the recently open sourced tool BinNavi.
See here for a short intro.
Note that the context of the MonoREIL framework is not compilers but the analysis of binary code. Yet, it has been used for real world applications, see slide 34 ff of this introduction (which contains more formal background).

Mathematical notation of programming concepts

There are many methods for representing the structure of a program (like UML class diagrams, etc.). I am interested in whether there is a convention which describes programs in a strict, mathematical way. I am especially interested in the use of mathematical notation for this purpose.
An example: classes are represented as sets (fields, properties) and functions (operating on the elements of those sets). A parent class's fields are a subset of the child class's. Functions are described in pseudocode which has to look like this and that...
I know that Z Notation has been used to some extent in the formal verification of software, such as the Tokeneer project.
Z Notation
Z Reference Manual
Concrete Mathematics: A Foundation for Computer Science:
http://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/0201558025
Yes, there is, Floyd-Hoare Logic.
There are a lot of ways, but I think most of them are inconvenient for expressing structure, since program structure is often not expressible in standard mathematical concepts. The main exception is of course functional programming languages. Think about folds (catamorphisms), groups, algebras, etc.
For imperative programming I know of the existence of Z, which uses (pure and extended) lambda calculus, set theory, and (first-order) predicate logic. However, I don't think it's very convenient. The only upside of using mathematics to express structure is the fact that you can prove things about it. But if you want to do that, take a look at JML, Spec#, or Eiffel.
Depends on what you're trying to accomplish, but going down this road with specific languages can get you into trouble.
For example, see the circle-ellipse discussion on C++ FAQ Lite.
I believe that Elements of Programming by Alexander Stepanov and Paul McJones is pretty close to what you are looking for.
This book applies the deductive method to programming by affiliating programs with the abstract mathematical theories that enable them to work. [...]
Concepts
A concept is a description of requirements on one or more types stated in terms of the existence and properties of procedures, type attributes, and type functions defined on the types.
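As a rough analogy of mine (not from the book): in OCaml terms a concept is close to a module signature, which names the type and the procedures that must exist on it, while the algebraic properties can only be stated as comments.

    (* A "concept"-like requirement: a type with an associative combine
       operation and an identity element (a monoid). *)
    module type MONOID = sig
      type t
      val empty : t                 (* the identity element *)
      val combine : t -> t -> t     (* required to be associative *)
    end

    (* A generic algorithm written against the concept, not a concrete type. *)
    module Fold (M : MONOID) = struct
      let concat xs = List.fold_left M.combine M.empty xs
    end

    (* One model of the concept: integers under addition. *)
    module IntSum : MONOID with type t = int = struct
      type t = int
      let empty = 0
      let combine = ( + )
    end

The book itself develops the idea in a small subset of C++ using templates, stating the required properties alongside the code.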
Z, which has already been mentioned, is pretty much what you describe. There are some variants of it for object-oriented modelling, but I think you can get quite far with "standard Z's" schemas if you wish to model classes.
There's also Alloy, which is newer and inspired by Z. Its notation is perhaps a bit closer to object-orientation. It is also analysable, i.e. you can check whether the models you create fulfill certain conditions, but it cannot prove that properties hold, only attempt to refute them within a finite scope.
The article Dependable Software by Design is a nice introduction to Alloy and its ilk, along with a table of available similar tools.
You are looking for functional programming. There are several functional programming languages, and they are all based on a fundamental mathematical theory called the Lambda calculus. Programs written in a functional programming language such as LISP are a mathematical representation of themselves. ;-)
There is a mathematical language which actually describes a program, or rather its operations. You take the initial state and then transform this state until you reach the desired target state. The transformations yield the program code which must be executed.
See the Wikipedia article about Hoare logic.
The basic idea is that for every function (no matter whether you put it into a class or into an old-style function), you have a pre- and a post-condition. For example, the precondition can be that you have an array which has >= 0 elements. The post-condition is that every element[i] must be <= element[j] for every i <= j.
The usual description would be "the function sorts the array". But the mathematical terms allow you to transform the input (which must match the precondition) into the output (which must match the postcondition).
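Written as a Hoare triple (my own plain-text rendering of the informal description above), the specification is:

    { length(A) >= 0 }   sort(A)   { for all i <= j: A[i] <= A[j] }

The precondition on the left must hold before the call, and the postcondition on the right is guaranteed afterwards. (A complete specification would also require the output to be a permutation of the input.)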
It's a bit unwieldy to use, especially for more complex programs, but some of the examples are pretty impressive. Often you get really compact code as the result, which looks quite complex but works on the first try.
I'd like to suggest Algebra of Programming. It's a calculational approach to programs, using Relational Algebra, and Galois Connections.
If you have further interest in this topic, you can find an amazing paper here, by Shin-Cheng Mu and José Nuno Oliveira (slides).
Since it uses relational algebra and first-order logic, it also has a nice synergy with Alloy, functional programming, and Design by Contract (easily applied to object-oriented programming).

What are the main issues in designing an interpreter for a functional language?

Suppose I want to implement an interpreter for a functional language. I would like to understand the issues involved in doing so and what suitable literature is available. This is a new language that is in its early design stages, which is why the question is broad in scope.
For the purpose of this discussion we can assume that the purpose of the language is not important and that its functional features can be changed (even drastically) if it makes a significant difference in the ease of writing an interpreter.
The MIT website has an online copy of Structure and Interpretation of Computer Programs as well as videos of the MIT 6.001 lectures using Scheme, recorded at HP in 1986. These form a great introduction to language design.
I would highly recommend Structure and Interpretation of Computer Programs (SICP) as a starting point. This book will introduce the idea of what it means to write an interpreter (and a compiler), and is generally a must-read for anybody designing languages.
Implementing an interpreter for a functional language isn't likely to be too much different from implementing an interpreter for any other general purpose language. There's lexical analysis, parsing, AST construction, semantic analysis, plus execution (for a pure interpreter) or code generation and optimisation (for a compiler, even compiling to bytecode like Java/Perl/Python). SICP will introduce the difference between "applicative order" and "normal order" evaluation, which may be important for you in a pure functional context.
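To get a feel for that distinction, here is a small OCaml sketch of mine that simulates normal-order (call-by-need) evaluation with Lazy; under applicative order the argument is evaluated first, so an unused diverging argument still hangs the program:

    (* Applicative order: the argument is evaluated before the call, so this
       call would never return even though the argument is never used. *)
    let const_zero _ = 0
    let rec loop () = loop ()
    (* let _ = const_zero (loop ())      (* diverges *) *)

    (* Normal order, simulated with Lazy: the argument is only evaluated
       if the function actually forces it. *)
    let const_zero_lazy (_ : int Lazy.t) = 0
    let _ = const_zero_lazy (lazy (loop ()))   (* returns 0 immediately *)

Whether your language is strict or lazy by default changes the shape of the evaluator, the cost model, and how easy it is to reason about side effects.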
For just about any language interpreter or compiler, the main issues are the same, I think.
You need to decide certain basic characteristics of the language (semantics, not syntax), and the bulk of the design of the thing follows from that.
For example, does your language have a type system? If so, what sorts of types does it have? Is it going to be statically typed, dynamically typed, duck-typed?
What sort of expressions are you planning to support? Do you need to define an order of operations? Will you even have operators?
What will you use as the run-time representation of the program? Will you convert the text to a byte-code representation, or an AST, or a tokenized form of the source text?
There are toolkits available to help take some of the tedium out of the actual parsing of text (ANTLR and Bison, to name two), but I don't know of anything that helps with the actual interpretation part of the task. I'm sure somebody will suggest something.
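For the interpretation part itself, the core of most functional-language interpreters is just a recursive evaluator over the AST with closures. Here is a minimal sketch in OCaml for an untyped lambda calculus with integer constants (my own illustration, not tied to any particular toolkit):

    (* A tiny functional language: integers, variables, lambdas, application. *)
    type expr =
      | Int of int
      | Var of string
      | Lam of string * expr        (* fun x -> body *)
      | App of expr * expr          (* f arg *)

    (* Values produced by evaluation; closures capture their environment. *)
    type value =
      | VInt of int
      | VClosure of string * expr * env
    and env = (string * value) list

    (* Applicative-order evaluation: evaluate the argument, then the body. *)
    let rec eval env = function
      | Int n -> VInt n
      | Var x -> List.assoc x env
      | Lam (x, body) -> VClosure (x, body, env)
      | App (f, arg) ->
          (match eval env f with
           | VClosure (x, body, clo_env) ->
               eval ((x, eval env arg) :: clo_env) body
           | VInt _ -> failwith "cannot apply an integer")

    (* (fun x -> x) 42  evaluates to  VInt 42 *)
    let _ = eval [] (App (Lam ("x", Var "x"), Int 42))

Everything else (data types, pattern matching, laziness, a type checker) is layered on top of this shape, which is essentially the structure SICP and EOPL walk through.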
The main issue is having a semantics for the language you're implementing -- with that, the implementation becomes straightforward. Otherwise, this question is incredibly broad and hard to answer.
I'd recommend Essentials of Programming Languages as a good complement to SICP, particularly if you're interested in interpreters: Official EOPL site. You may want to check out the third edition-- the site hasn't been updated for it yet.
Edit: spam prevention is making me choose between links, so the official page is now unlinked. It's easily Google-able, though.

Resources