Is Date.now referentially transparent? - functional-programming

Is DateTime.Now or Date.now referentially transparent?
This is one of the controversial topics in a functional programming article on Qiita.
First of all, we must be careful, since "referential transparency" is a tricky word/concept, and a prominent discussion of it exists in
What is referential transparency?
The questioner states:
What does the term referential transparency mean? I've heard it described as "it means you can replace equals with equals" but this seems like an inadequate explanation.
A very typical explanation, and one that typically leads us to misunderstanding, is as follows (the #2 answer on the above page, by #Brian R. Bondy):
Referential transparency, a term commonly used in functional programming, means that given a function and an input value, you will always receive the same output. That is to say there is no external state used in the function.
A typical claim that I have often heard, and think is wrong, goes like this:
In a programming language, Date.now always returns a different value corresponding to the current time, and according to
given a function and an input value, you will always receive the same output.
therefore, Date.now is not referentially transparent!
I know some (functional) programmers firmly believe the above claim; however, the #1 and #3 answers by #Uday Reddy explain as follows:
Any talk of "referential transparency" without understanding the distinction between L-values, R-values and other complex objects that populate the imperative programmer's conceptual universe is fundamentally mistaken.
The functional programmers' idea of referential transparency seems to differ from the standard notion in three ways:
Whereas the philosophers/logicians use terms like "reference", "denotation", "designatum" and "bedeutung" (Frege's German term), functional programmers use the term "value". (This is not entirely their doing. I notice that Landin, Strachey and their descendants also used the term "value" to talk about reference/denotation. It may be just a terminological simplification that Landin and Strachey introduced, but it seems to make a big difference when used in a naive way.)
Functional programmers seem to believe that these "values" exist within the programming language, not outside. In doing this, they differ from both the philosophers and the programming language semanticists.
They seem to believe that these "values" are supposed to be obtained by evaluation.
Come to think of it, "external state" is also a tricky word/concept.
Referential transparency, a term commonly used in functional programming, means that given a function and an input value, you will always receive the same output. That is to say there is no external state used in the function.
Is "current time" "external state" or "external value"?
If we call "current time" an "external state", what about a "mouse event"?
A "mouse event" is not a state that should be managed by the programming context; it is rather an external event.
given a function and an input value, you will always receive the same output.
So, we can understand it as follows:
"current time" is neither an "input value" nor an "external value" nor an "external state", and Date.now always returns the same output corresponding to the ongoing event "current time".
If one still insists on calling "current time" a "value", then again,
Functional programmers seem to believe that these "values" exist within the programming language, not outside. In doing this, they differ from both the philosophers and the programming language semanticists.
The value of "current time" never exists within the programming language, only outside it, and that outside value is obviously updated not by the programming context but by the real world's flow of time.
Therefore, I understand Date.now to be referentially transparent.
I'd like to read your idea. Thanks.
EDIT1
In
What is (functional) reactive programming?
Conal Elliott (#Conal) also explains functional reactive programming (FRP).
He is one of the earliest developers of FRP, and he explains it like this:
FRP is about - “datatypes that represent a value ‘over time’ “
Dynamic/evolving values (i.e., values “over time”) are first class values in themselves.
In this FRP perspective,
Date can be seen as a first-class value "over time", an immutable object on the time axis.
.now is a property/function that addresses "the current time" within that Date value.
Therefore Date.now returns an immutable and referentially transparent value that represents our "current time".
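To make this FRP reading concrete, here is a minimal sketch in Haskell (my own toy code, not Conal Elliott's actual library): a behaviour is simply a function of time, and "now" is the identity behaviour on the time axis.
type Time = Double               -- the time axis, in seconds; the unit is an assumption

type Behavior a = Time -> a      -- a first-class value "over time"

-- The "current time" behaviour: at every point t of the time axis, its value is t.
now :: Behavior Time
now = \t -> t

-- Deriving another behaviour is ordinary function composition, so everything stays pure.
nowInMinutes :: Behavior Time
nowInMinutes = (/ 60) . now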
EDIT2
(in JavaScript)
A referentially intransparent function:
let a = 1;
let f = () => (a);
The input of function f is none;
the output of function f depends on a, which depends on a context outside f.
A referentially transparent function:
let t = Date.now();
let f = (Date) => (Date.now());
Although the Date value resides in our physical world, Date can be seen as an immutable FRP first-class value "over time".
Since the Date referred to from any programming context is identical, we may implicitly omit Date as an input value and simply write
let f = () => (Date.now());
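For comparison, here is a rough Haskell rendering of the same contrast (a sketch; getPOSIXTime comes from the time package, the other names are mine): the version that asks the runtime for the clock lives in IO, while the version that receives the time as an argument is an ordinary pure function.
import Data.Time.Clock.POSIX (POSIXTime, getPOSIXTime)

-- Like calling Date.now() inside the body: the result depends on when it runs,
-- so the dependence on the outside world is made explicit by the IO type.
nowInIO :: IO POSIXTime
nowInIO = getPOSIXTime

-- Like let f = (Date) => (Date.now()): the "current time" arrives as an argument,
-- and equal arguments always produce equal results.
elapsedSince :: POSIXTime -> POSIXTime -> POSIXTime
elapsedSince start current = current - start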
EDIT3
Actually, I emailed Conal Elliott (#Conal), who is one of the earliest developers of FRP.
He kindly replied and informed me that there is a similar question here:
How can a time function exist in functional programming?
The questioner states:
So my question is: can a time function (which returns the current time) exist in functional programming?
If yes, then how can it exist? Does it not violate the principle of functional programming? It particularly violates referential transparency which is one of the property of functional programming (if I correctly understand it).
Or if no, then how can one know the current time in functional programming?
and here is the answer by Conal Elliott (#Conal) on Stack Overflow:
Yes, it's possible for a pure function to return the time, if it's given that time as a parameter. Different time argument, different time result. Then form other functions of time as well and combine them with a simple vocabulary of function(-of-time)-transforming (higher-order) functions. Since the approach is stateless, time here can be continuous (resolution-independent) rather than discrete, greatly boosting modularity. This intuition is the basis of Functional Reactive Programming (FRP).
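Here is a small sketch of the idea in that answer, in Haskell (toy code of mine, not the actual FRP vocabulary): every quantity is a function of time, and higher-order helpers combine such functions without ever asking "what time is it now?".
type Time = Double

-- Two example quantities "over time" (the formulas are arbitrary).
position :: Time -> Double
position t = 3 * t

velocity :: Time -> Double
velocity _ = 3

-- A function(-of-time)-transforming, higher-order combinator.
combine :: (a -> b -> c) -> (Time -> a) -> (Time -> b) -> (Time -> c)
combine f x y = \t -> f (x t) (y t)

-- "position plus velocity" is again just a function of time; no state anywhere.
both :: Time -> Double
both = combine (+) position velocity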
EDIT4
My appreciation for the answer by #Roman Sausarnes.
Please allow me to introduce my perspective on functional programming and FRP.
First of all, I think programming is fundamentally all about mathematics, and functional programming pursues that aspect. On the other hand, imperative programming is a way to describe steps of machine operation, which is not necessarily mathematics.
Pure functional programming languages like Haskell have some difficulty handling "state" or IO, and I think the whole problem comes from "time".
"State" or "time" is a largely subjective entity to us humans. We naturally believe that "time" is flowing or passing and that "state" is changing; that is naïve realism.
I think naïve realism about "time" is a fundamental hazard and the reason for all the confusion in the programming community, and very few discuss this aspect. In modern physics, or even in Newtonian physics, we treat time in a purely mathematical way, so if we view our world the way physics does, nothing should stop us from treating it with purely mathematical functional programming.
So, I view our world/universe as immutable, like a pre-recorded DVD, and only our subjective view as mutable, including "time" or "state".
In programming, the only connection between the immutable universe and our mutable subjective experience is the "event". Pure functional programming languages such as Haskell basically lack this view; some insightful researchers, including Conal Elliott, have pursued FRP, but the majority still consider FRP minor or hard to use, and many of them treat mutable state as a matter of course.
Naturally, FRP is the only smart solution, and Conal Elliott especially, as a founder, applied this philosophical perspective and declared it: first-class values "over time". Perhaps, unfortunately, many programmers do not understand what he really meant, since they are trapped by naïve realism and find it difficult to view "time" as a philosophically, or physically, immutable entity.
So, if they discuss "pure functional" or "referential transparency" for the advantage of mathematical integrity/consistency, then to me, Date.now is naturally referentially transparent within pure functional programming, simply because Date.now accesses a certain point of the immutable time-line of the immutable universe.
So what about referential transparency in denotational semantics, as #Reddy or #Roman Sausarnes discusses?
My view is that referential transparency in FP, especially in the Haskell community, is all about mathematical integrity/consistency.
Sure, maybe I could follow the Haskell community's updated definition of "referential transparency", and practically, we judge code to be mathematically inconsistent if we judge it not to be referentially transparent, correct?
Actually, again,
How can a time function exist in functional programming?
A programmer asked as follows:
So my question is: can a time function (which returns the current time) exist in functional programming?
If yes, then how can it exist? Does it not violate the principle of functional programming? It particularly violates referential transparency which is one of the property of functional programming (if I correctly understand it).
Or if no, then how can one know the current time in functional programming?
Consensus
violate the principle of functional programming
= violates referential transparency which is one of the property of functional programming
= Mathematically inconsistent!!
This is our common perception, correct?
In this question, many answered that a "function returning the current time" is not referentially transparent, especially under the Haskell community's definition of "referential transparency", and many mentioned that this is about mathematical consistency.
However, only a few answered that a "function returning the current time" is referentially transparent, and one of those answers is from the FRP perspective, by Conal Elliott (#Conal).
IMO, FRP, a perspective that handles a time-stream as a first-class immutable value "over time", is the correct approach, aligned with mathematical principles as in physics, as I mentioned above.
Then how did "Date.now"/"a function returning the current time" become referentially intransparent in the Haskell context?
Well, the only explanation I can think of is that the Haskell community's updated definition of "referential transparency" is somewhat wrong.
Event-driven & Mathematical integrity/consistency
I mentioned that in programming, the only connection between the immutable universe and our mutable subjective experience is the "event", or being "event-driven".
Functional programming is evaluated in an event-driven manner; imperative programming, on the other hand, is evaluated as the steps/routine of machine operation described in the code.
"Date.now" depends on an "event", and in principle, the "event" is unknown to the context of the code.
So, does being event-driven destroy mathematical integrity/consistency? Absolutely not.
Mapping Syntax to Meaning - indexical (index finger)
C.S. Peirce introduced the term 'indexical' to suggest the idea of pointing (as in 'index finger'): ⟦I⟧, ⟦here⟧, ⟦now⟧, etc.
Probably this is mathematically the same concept as the "monad" and "functor" notions in Haskell. In denotational semantics, even in Haskell, ⟦now⟧ as the 'index finger' is clear.
Indexical (index finger) is subjective, and so is event-driven
⟦I⟧, ⟦here⟧, ⟦now⟧, etc. are subjective, and again, in programming, the only connection between the immutable objective universe and our mutable subjective experience is the "event", or being "event-driven".
Therefore, as long as ⟦now⟧ is bound to an event declaration of event-driven programming, the subjective (context-dependent) mathematical inconsistency never occurs, I think.
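As a toy illustration of treating ⟦now⟧ as an indexical (my own sketch, not a standard Haskell API): an indexical expression only receives a meaning relative to a context of evaluation, here the context supplied by the triggering event.
type Time = Double

-- The context of evaluation supplied by an event: who, where and when.
data Context = Context { speaker :: String, place :: String, time :: Time }

-- The indexical expressions of a tiny language.
data Indexical = I | Here | Now

-- Denotation: the meaning of an indexical is a function from contexts to values.
denote :: Indexical -> Context -> String
denote I    ctx = speaker ctx
denote Here ctx = place ctx
denote Now  ctx = show (time ctx)
Nothing here is mutable; ⟦now⟧ simply picks a component out of whatever context the event hands it.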
EDIT5
#Bergi gave me an excellent comment:
Yes, Date.now, the external value, is referentially transparent. It always means "current time".
But Date.now() is not, it's a function call returning different numbers depending on external state. The problem with the referentially transparent "concept of current time" is that we cannot compute anything with it.
#KenOKABE: Seems to be the same case as Date.now().
The problem is that it does not mean the current time at the same time, but at different times - a program takes time to execute, and this is what makes it impure.
Sure, we could devise a referentially transparent Date.now function/getter that always returns the time of the start of the program (as if a program execution was immediate), but that's not how Date.now()/Date.Now work. They depend on the execution state of the program. – Bergi
I think we need to discuss it.
Date.now, the external value, is referentially transparent.
⟦Date.now⟧ is, as I mentioned in EDIT4, an indexical (index finger) that is subjective, but as long as it remains in the indexical domain (without an execution/evaluation), it is referentially transparent; on that we agree.
However, #Bergi suggests that Date.now() (with an execution/evaluation) returns "different values" at different times, and so is no longer referentially transparent. On that we have not agreed.
I think the problem he has shown surely exists, but only in imperative programming:
console.log(Date.now()); //some numeric for 2016/05/18 xx:xx:xx ....
console.log(Date.now()); //different numeric for 2016/05/18 xx:xx:xx ....
In this case, Date.now() is not referentially transparent, I agree.
However, in the functional programming/declarative programming paradigm, we would never write code like the above. We must write this:
const f = () => (Date.now());
and this f is evaluated in some "event-driven" context. That is how functional-programming code behaves.
Yes, this code is identical to
const f = Date.now;
Therefore, in the functional programming/declarative programming paradigm, Date.now or Date.now() (with an execution/evaluation) never has the problem of returning "different values" at different times.
So, again, as I mentioned in EDIT4, as long as ⟦now⟧ is bound to an event declaration of event-driven programming, the subjective (context-dependent) mathematical inconsistency never occurs, I think.
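A minimal Haskell sketch of what I mean by binding ⟦now⟧ to an event (the Event type and its field names are my own assumptions): the handler never asks for the time; it receives the event's timestamp as an argument, so it stays an ordinary pure function.
type Time = Double

-- The event carries its own timestamp.
data Event = Event { occurredAt :: Time, payload :: String }

-- A pure handler: the same event always yields the same result.
handle :: Event -> String
handle e = payload e ++ " at " ++ show (occurredAt e)

-- Some event loop outside the pure part supplies the events, e.g.
-- register (\e -> putStrLn (handle e))   -- 'register' is hypothetical here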

Okay, I'm going to take a stab at this. I'm not an expert on this stuff, but I've spent some time thinking about #UdayReddy's answers to this question that you linked to, and I think I've got my head wrapped around it.
Referential Transparency in Analytic Philosophy
I think you have to start where Mr. Reddy did in his answer to the other question. Mr. Reddy wrote:
The term "referent" is used in analytical philosophy to talk about the thing that an expression refers to. It is roughly the same as what we mean by "meaning" or "denotation" in programming language semantics.
Note the use of the word "denotation". Programming languages have a syntax, or grammar, but they also have a semantics, or meaning. Denotational semantics is the practice of translating a language's syntax to its mathematical meaning.
Denotational semantics, as far as I can tell, is not widely understood even though it is one of the most powerful tools around for understanding, designing, and reasoning about computer programs. I gotta spend a little time on it to lay the foundation for the answer to your question.
Denotational Semantics: Mapping Syntax to Meaning
The idea behind denotational semantics is that every syntactical element in a computer language has a corresponding mathematical meaning, or semantics. Denotational semantics is the explicit mapping between syntax and semantics. Take the syntactic numeral 1. You can map it to its mathematical meaning, which is just the mathematical number 1. The semantic function might look like this:
syntax
↓
⟦1⟧ ∷ One
↑
semantics
Sometimes the double-square brackets are used to stand for "meaning", and in this case the number 1 on the semantic side is spelled out as One. Those are just tools for indicating when we are talking about semantics and when we are talking about syntax. You can read that function to mean, "The meaning of the syntactic symbol 1 is the number One."
The example that I used above looks trivial. Of course 1 means One. What else would it mean? It doesn't have to, however. You could do this:
⟦1⟧ ∷ Four
That would be dumb, and no-one would use such a dumb language, but it would be a valid language all the same. But the point is that denotational semantics allows us to be explicit about the mathematical meaning of the programs that we write. Here is a denotation for a function that squares the integer x using lambda notation:
⟦square x⟧ ∷ λx → x²
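To make the syntax-to-semantics mapping tangible, here is a tiny denotational semantics written in Haskell (my own toy example): the ⟦·⟧ brackets of the text become an ordinary function from syntax trees to numbers.
-- Syntax: a tiny expression language.
data Expr = Lit Integer          -- numerals such as 1
          | Add Expr Expr        -- e1 + e2
          | Square Expr          -- square e

-- Semantics: the meaning ⟦e⟧ of each expression is a mathematical number.
denote :: Expr -> Integer
denote (Lit n)    = n                     -- ⟦1⟧ = One
denote (Add a b)  = denote a + denote b
denote (Square a) = denote a * denote a   -- ⟦square x⟧ = λx → x²

-- ⟦square (1 + 1)⟧ works out to 4.
example :: Integer
example = denote (Square (Add (Lit 1) (Lit 1)))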
Now we can move on and talk about referential transparency.
Referential Transparency is About Meaning
Allow me to piggyback on Mr. Uday's answer again. He writes:
A context in a sentence is "referentially transparent" if replacing a term in that context by another term that refers to the same entity doesn't alter the meaning.
Compare that to the answer you get when you ask the average programmer what referential transparency means. They usually say something like the answer you quoted above:
Referential transparency, a term commonly used in functional programming, means that given a function and an input value, you will always receive the same output. That is to say there is no external state used in the function.
That answer defines referential transparency in terms of values and side effects, but it totally ignores meaning.
Here is a function that under the second definition is not referentially transparent:
var x = 0
func changeX() -> Int {
x += 1
return x
}
It reads some external state, mutates it, and then returns the value. It takes no input, returns a different value every time you call it, and it relies on external state. Meh. Big deal.
Given a correct denotational semantics, it is still referentially transparent.
Why? Because you could replace it with another expression with the same semantic meaning.
Now, the semantics of that function is much more confusing. I don't know how to define it. It has something to do with state transformations: given a state s and a function that produces a new state s', the denotation might look something like this, though I have no idea if this is mathematically correct:
⟦changeX⟧ ∷ λs → (s → s')
Is that right? I don't have a clue. Strachey figured out the denotational semantics for imperative languages, but it is complicated and I don't understand it yet. By establishing the denotational semantics, however, he established that imperative languages are every bit as referentially transparent as functional languages. Why? Because the mathematical meaning can be precisely described. And once you know the precise mathematical meaning of something, you can replace it with any other term that has the same meaning. So even though I don't know what the true semantics of the changeX function is, I know that if I had another term with the same semantic meaning, I could swap one out for the other.
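For what it is worth, here is one conventional way to write such a denotation down in Haskell (a sketch of the standard state-transformer idea, not necessarily the "true" semantics of the snippet above): the meaning of changeX is a function from an initial state to a pair of result and new state, and any two terms with that same meaning are interchangeable.
-- The state here is just the current value of x.
type S = Int

-- A denotation for changeX: a state transformer returning the new value of x.
changeX :: S -> (Int, S)
changeX s = let s' = s + 1 in (s', s')

-- A syntactically different term with the same meaning; the two can be swapped
-- for one another, which is the replace-equals-with-equals reading of
-- referential transparency at the level of meanings.
changeX' :: S -> (Int, S)
changeX' s = (s + 1, s + 1)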
So What About Date.now?
I don't know anything about that function. I'm not even sure what language it is from, though I suspect it may be Javascript. But who cares. What is its denotational semantics? What does it mean? What could you insert in its place without changing the meaning of your program?
The ugly truth is, most of us don't have a clue! Denotational semantics isn't that widely used to begin with, and the denotational semantics of imperative programming languages is really complicated (at least for me - if you find it easy, I'd love to have you explain it to me). Take any imperative program consisting of more than about 20 lines of non-trivial code and tell me what its mathematical meaning is. I challenge you.
By contrast, the denotational semantics of Haskell is pretty straightforward. I have very little knowledge of Haskell. I've never done any coding in it beyond messing around in ghci, but what makes it so powerful is that the syntax tracks the semantics more closely than any other language that I know of. Being a pure functional language, the semantics are right there on the surface of the syntax. The syntax is defined by the mathematical concepts that define the meaning.
In fact, the syntax and semantics are so closely related that functional programmers have begun to conflate the two. (I humbly submit this opinion and await the backlash.) That is why you get definitions of referential transparency from FPers that talk about values instead of meaning. In a language like Haskell, the two are almost indistinguishable. Since there is no mutable state and every function is a pure function, all you have to do is look at the value that is produced when the function is evaluated and you've basically determined its meaning.
It may also be that the new-age FPer's explanation of referential transparency is, in a way, more useful than the one that I summarized above. And that cannot be ignored. After all, if what I wrote above is correct then everything that has a denotational semantics is referentially transparent. There is no such thing as a non-referentially transparent function, because every function has a mathematical meaning (though it may be obscure and hard to define) and you could always replace it with another term with the same meaning. What good is that?
Well, it's good for one reason. It lets us know that we don't know jack about the mathematics behind what we do. Like I said above, I haven't a clue what the denotational semantics of Date.now is or what it means in a mathematical sense. Is it referentially transparent? Yeah, I'm sure that it is, since it could be replaced by another function with the same semantics. But I have no idea how to evaluate the semantics for that function, and therefore its referential transparency is of no use to me as a programmer.
So if there's one thing I've learned out of all of this, it is to focus a lot less on whether or not something meets some definition of "referential transparency" and a lot more on trying to make programs out of small, mathematically composable parts that have precise semantic meanings that even I can understand.

Is Date.now referentially transparent?
Here's a referentially-transparent random-number generator:
...so if Date.now was defined in similar fashion, e.g.:
int Date.now()
{
return 314159265358979; // think of it as
// a prototype...
}
then it would also be referentially transparent (but not very useful :-).

Related

Is functional programming a type of declarative programming?

I am aware that declarative programming just passes the input and expects the output without stating the procedure for how it is done. Functional programming is a programming paradigm that takes an input and returns an output. When I checked higher-order functional programming, we pass a function to map/reduce, which does not reveal the procedure for how it is done. So are higher-order functional programming and declarative programming the same thing?
Short answer: No.
Wikipedia defines declarative programming as:
In computer science, declarative programming is a programming
paradigm - a style of building the structure and elements of computer
programs - that expresses the logic of a computation without describing
its control flow.
Or to state it a bit boldly: "Say what you want, not how you want it.".
This is thus in contrast with imperative programming languages, where a program is seen as a set of instructions that are done one after another. The fact that map, etc. do not reveal the procedure does not make it declarative: one can use a lot of C libraries that are proprietary and do not allow you to inspect the source code. That, however, does not mean that these are declarative.
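As a small illustration of "say what you want, not how you want it" versus spelling out the steps (my own toy example in Haskell): both definitions below compute the same sum of squares, but only the second one commits to an explicit traversal.
-- Declarative style: state what the result is, in terms of map and sum.
sumOfSquares :: [Int] -> Int
sumOfSquares xs = sum (map (^ 2) xs)

-- More operational style: spell out the recursion (the "how") by hand.
sumOfSquares' :: [Int] -> Int
sumOfSquares' []       = 0
sumOfSquares' (x : xs) = x * x + sumOfSquares' xs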
The definition of functional programming on the other hand is:
In computer science, functional programming is a programming paradigm
- a style of building the structure and elements of computer programs - that treats computation as the evaluation of mathematical functions and avoids changing-state and mutable data. It is a declarative
programming paradigm, which means programming is done with expressions
or declarations instead of statements.
Based on these definitions, one could say that functional programming is a subset of declarative programming. In a practical sense, however, if we follow the strict definitions, no programming language nowadays is purely and unambiguously declarative or functional. One can, however, say that Haskell is more declarative than Java.
Declarative programming is usually considered to be "safer", since people tend to have trouble managing side effects. A lot of programming errors are the result of not taking all side effects into account. On the other hand, it is hard to:
design a language that allows a programmer to describe what he wants without going into details on how to do it;
implement a compiler that will generate - based on such programs - an efficient implementation; and
some problems have inherent side effects. For instance if you work with a database, a network connection or a file system, then reading/writing to a file for instance is supposed to have side effects. One can of course decide not to make this part of the programming language (for instance many constraint programming languages do not allow these type of actions, and are a "sub language" in a larger system).
There have been several attempts to design such languages. The most popular are - in my opinion - logic programming, functional programming, and constraint programming. Each has its merits and problems. We can also observe this declarative approach in, for instance, databases (like SQL) and text/XML processing (with XSLT, XPath, regular expressions, ...), where one does not specify how a query is resolved, but simply specifies, through for instance a regular expression, what one is looking for.
Whether a programming language is declarative, however, is a bit of a fuzzy discussion. Although programming languages, modeling languages and libraries like Haskell, Prolog, Gecode, ... have definitely made programming more declarative, these are probably not declarative in the strictest sense. In the strictest sense, one should expect that regardless of how you write the logic, the compiler will always come up with the same result (although it might take a bit longer).
Say for instance we want to check whether a list is empty in Haskell. We can write this like:
is_empty1 :: [a] -> Bool
is_empty1 [] = True
is_empty1 (_:_) = False
We can however write it like this as well:
is_empty2 :: [a] -> Bool
is_empty2 l = length l == 0
Both should give the same result for the same queries. If we give them an infinite list, however, is_empty1 (repeat 0) will return False whereas is_empty2 (repeat 0) will loop forever. So that means that we somehow still wrote some "control flow" into the program: we have defined - to some extent - how Haskell should evaluate this. Although lazy evaluation means that a programmer does not really specify what should be evaluated first, there are still specifications of how Haskell will evaluate this.
According to some people, this is the difference between programming and specifying. One of my professors once stated that, according to him, the difference is that when you program something, you somehow have control over how something is evaluated, whereas when you specify something, you have no control. But again, this is only one of many definitions.
Not entirely: functional programming emphasises what to compute rather than how to compute it. However, there are patterns available in functional programming that are pretty much the control-flow patterns you would commonly associate with imperative programming; take for example the following control flow:
let continue = ref true in
while !continue do
...
if cond then continue := false
else
...
done
Looks familiar huh? Here you can see some declarative constructs but this time round we are in more control.

completely replace the inner syntax in isar?

I am interested in using Isar as a meta language for writing formal proofs about J, an executable math notation and programming language, and I'd like to be able to use J as the inner syntax.
J consists of a large number of primitives, and assigns (multiple!) meanings to every ASCII character, including single and double quotes.
Where can I find documentation or example code for implementing a completely new inner syntax? Or is this even possible? (I've been looking around in the src/ directory, but it's somewhat overwhelming and I'm not entirely sure what I'm looking for.)
Answer B: Building on HOL, with an Improvised J Syntax
Clarification is good, but I don't like to do the handshaking necessary to do it.
My first answer below was largely based on your phrase, "a completely new syntax", and I think it's half of an answer to a question like this:
Suppose, hypothetically, that I need syntax that's very close to the syntax of J. What would that require, with regards to Isabelle/HOL?
My answer:
Most likely, I'd say you would have to undefine much of the syntax for the constants, functions, and type classes of Isabelle/HOL, which would require that you do extensive editing of the standard Isabelle/HOL distribution, to get it back working. And some syntax in Isabelle/HOL, you most likely wouldn't be able to take out.
Or, you would have to start fresh, with an import of Pure as a starting point. Please see my first answer below.
Just Syntax? Now we're back in normal user space
The customization of syntax in Isabelle/HOL makes us all a potential True Artiste.
There are advanced ways to tap into the power of defining syntax, such as parse_translation with Isabelle/ML, but I don't use advanced methods. I use a few basic keywords to define syntax: notation, no_notation, syntax, and translations, along with abbreviation, for when either I want to rearrange the input arguments of a function, or I don't want to mess up the notation for a standard HOL function.
notation, no_notation, the easy ones
I don't use no_notation a lot, but you need it in your arsenal. For an example, see Can I overload the notation for operators that are assigned to bool and list?.
The use of notation is easy, once you see a few examples.
For an infix operator, plus :: 'a => 'a => 'a, here are some examples:
notation plus (infixl "[+]" 65)
notation (input) plus (infixl "[+]" 65)
notation (output) plus (infixl "[+]" 65)
With that example, I entered into the realm of possibly messing up the notation for plus, which is an operator for a standard, HOL type class.
The line from above that won't mess up the output display is the line that uses (input).
For notation, to find examples, do greps in THY files or on the src/HOL folder, because there are too many variations to give you lots of examples here.
abbreviation, and not messing other things up
Suppose I want a really tight binding for the standard even predicate. I could do something like this:
notation (input) even ("even _" [1000] 1000)
notation (output) even ("even _" [1000] 999)
I say "could", because I don't know how that will mess up the standard function application of even, so I wouldn't want to do that.
Why the 999? It's just from trial and error, and from experience, where I know that this next line alone messes up declare[[show_brackets]]:
notation even ("even _" [1000] 1000)
That's the way it is with defining syntax. It's a combination of trial and error, finding examples for use as templates, experience, and noticing later on that you messed something up.
I forget all the things that abbreviation helps me out with. An innovative use of abbreviation can keep you from having to use more complicated methods.
You could use it to rearrange arguments, for some notational purpose:
abbreviation list_foo :: "'a list => 'a => 'a list" where
"list_foo xs x == x # xs"
notation
list_foo ("_ +#+ _" [65, 65] 64)
That example is an example of several examples. I was just trying to make a quick example, and I had something like (infixl "_ +#+ _" [65, 65] 64). There's not a lot of variation in how I define notation, so I had to find an example in Set.thy to show me that I needed to take out the infixl, since I wanted to use [65, 65] 64 as a variation on how you can define syntax.
Did I get the priorities right with [65, 65] 64? I have no idea. It's just for a quick example.
syntax and translations
You have to have it in your arsenal, but it will cause you a lot of time-consuming grief. Do greps and find examples. Try this and that. When you stumble on something that works, and you think you need it, then save it somewhere. If you don't, and you make a small change that breaks what you had, and you didn't save what you had that worked, you will regret having to spend a lot of time trying to get back to what worked.
The Isar Reference Manual, isar-ref.pdf#175 has a little info. Also, you can look up the use of notation in that PDF.
The unasked for part of Answer Part B
In your comment, you say this:
I already do have a "logic of programming" that I want to implement (cs.utoronto.ca/~hehner/FMSD) and J is a language that's especially well suited for formal proofs. I'm just trying to figure out how to re-use Isabelle's logic infrastructure rather than writing my own.
A short, unsafe answer, from anybody, for a question like this, even hedged, is like:
You most likely can't do, in Isabelle/HOL, what you're wanting to do with J.
A safer, short answer is like this:
Most likely, you will have major problems trying to do what you're wanting to do with J in Isabelle/HOL.
Those are short, quick answers. How can an answer to a question like this be short, if it actually tries to address the why?
It ends up being a "given everything I know" answer, because many times it's not that it can't be done, but that the right group of people, given a long enough period of time, given the right technology, haven't yet done it.
My headings below become my points. I try to blow through the rest fairly quickly, but still document things.
Because you are using HOL as your logic, my original answer still applies, if slightly modified
The development of Isabelle/HOL into what it is today, starting with Robin Milner, is what I categorize as rocket science logic.
From all of my searches, and from all of my listening, it appears that there's still a lot of rocket science logic that needs to be developed before proof assistants can be used to formally verify any ole program written in any ole imperative programming language.
You have a logic, HOL, but you're implying that you're going to implement something similar to what a whole lot of people want, and have wanted for a long time.
What's below is to support what I say here.
J as a language well suited for formal proofs
There would be the traditional form of algorithm analysis, like Introduction to Algorithms, 3rd, by Cormen & Leiserson.
I'll call program proofs in Isabelle/HOL mechanized proofs and formally verified programs. I also consider certain pencil-and-paper proofs to be formal.
In traditional, non-mechanized proofs, then, yes, I guess J is a language well suited for formal proofs, which I say because you've told me it is. But then, big, popular programming languages, in particular C++ and Java, have textbooks written about them on the subject of formal, algorithm analysis. So, it must be, with traditional, non-mechanized proofs, they can also be reasoned about.
J in the context of mechanized proofs
No, it's not a language well-suited for formal, mechanized proofs. It uses (a better word than uses?) imperative programming, and it appears to be object oriented.
Largely, I'm just repeating things I've read others say. I'll start making statements as my personal conclusions. That will make things shorter.
Functional programming languages are good for formal proofs. Traditional programming involves mutating variables, and supposedly that bumps way up the difficulty of proofs.
I was searching for a statement about object oriented languages on the mailing list, but if you listen, people say they've done this or that special thing, but it's never something like, "Here's a complete development and formalization that easily allows you to verify programs written in general-purpose programming language X".
Formal proof, among other things, is about a set of axioms being enforced, where the selection of the axioms is the result of rocket science logic over a number of years, because the norm is not for a seemingly desirable set of axioms to be logically consistent.
For formal verification, you don't get to bypass the enforcement of the axioms. In textbooks, number constants just show up and get used, and they reason about them.
In formal proof, number constants, in particular the real numbers, are difficult to use. Ask yourself, "What is a natural number, an integer, a rational number, and a real number constant in Isabelle/HOL?" Now, if you answered that question, then ask yourself, "How do I do proofs involving natural numbers, integers, rational numbers, and real numbers in Isabelle/HOL?"
Now, contrast those questions with the fact that number constants just show up in most textbooks, and get used. That's not the way it works in formal proof. There's no magical appearance of number systems and constants. There can be a little magic in the automation of proofs involving numbers, but I'm pretty sure I'm doomed if my plan ever becomes dependent on magic like that.
L4.verified (and AutoCorres)
There's the L4.verified project by NICTA. (Update: And at sel4.systems, with co-credit given to General Dynamics C4 Systems. A big-name company like GD being involved supports my thesis that formal verification of imperative programming languages is something that's been highly desired for a long time.)
A quote:
We chose an operating system kernel to demonstrate this: seL4. It is a small, 3rd generation high-performance microkernel with about 8,700 lines of C code.
Why so selective? Why not any ole C program? I guess verifying C is hard. NICTA, they're not a small, inexperienced, unfunded group.
(Update: There's also the related AutoCorres project at NICTA, with its Quickstart Guide PDF. The release version is at v1.0, which was released on 2014-12-16. That must mean that they achieved the primary goal of whatever it was they were supposed to achieve. When I read their overview on the AutoCorres web page, I take it as supporting what I'm saying. It appears to me that they engage in some rocket science logic to get the C into another form, at least a little rocket science logic. I'm no authority on what constitutes rocket science logic. I think I'm safe in saying for sure that they're using PhD level logic to get their results.)
The book Practical Theory of Programming: where did number constants come from?
I downloaded the PDF for the book A Practical Theory of Programming.
One of the first things I started looking for in that book is "what are numbers and how are they formalized".
Number systems, we take them for granted, but they represent all that which is difficult about formal proof.
In a book, when number constants just show up, and just start getting used, it most likely means that there's no real formalization of the corresponding number systems. Why? Building up number system constants is extraordinarily involved.
If number constants weren't formally built up, there's no real formal proof there. If they do get built up formally, life is still not easy.
Here's something about the difficulty of working with real numbers: Larry Paulson's talk at NASA in 2014.
The book Practical Theory of Programming: while loops
The other thing I immediately started looking for was an example of a traditional loop, where you repeatedly modify a variable.
It starts at Section 5.2.0 While Loop, aPToP.pdf#76. The example is on the following page, Exercise 265:
while ¬ x = y = 0 do
if y > 0 then y := y - 1
else (x := x - 1. var· y := n)
There you go, a classic example of using mutable state (where I did searches on "mutable state" to actually see if I used the phrase correctly, with no clear conclusion).
You have a variable, and you're changing its contents. That, so I hear, or so I conclude, represents why you're doomed when it comes to wanting to verify programs you write in J.
It's not that I want you to be doomed. When you put up on GitHub "The Formalization of the J Programming Language in Isabelle/HOL - with Many Demonstrations Showing the Ease with which J Programs Can Be Formally Verified", I'll be there.
Coq. What's out there for imperative programming?
I have this hunch that Coq would be better, if my main application was programming.
I keep the requirements minimal, by doing a Google search on coq imperative.
The first link is Ynot.
Does this support your idea that you should be able to take J and implement it in Isabelle/HOL?
Not to me. It supports my idea that if someone, who knows a lot, and gets to make a design decision about the language they're going to use, then they can do formal verification of imperative programs in a proof assistant.
You, on the other hand, first pick the programming language, and then are now going to mold a proof assistant around it.
My interest about J, on a scale from 0 to 10
At this point, my interest in J is basically 0, on a scale from 0 to 10.
Suppose, though, you put up a web site, "How It's Going with That J Thing", and I subscribe to it with an RSS reader.
It's not that I don't want you to formally verify J programs in Isabelle/HOL, it's that I don't think you'll be able to do it, and so there's no reason for me to care about it, since I don't need it.
However, if I saw new activity in my RSS reader for your site, and it told me you succeeded, and you put your code up on GitHub, then my interest goes to 10. Someone doing formalization for a full-blown programming language in Isabelle/HOL, where proofs can be decently implemented, like for functional programming, and not just for a small subset of the language, that's something to be interested in.
Original Answer
Four days have passed, it's the holiday period, and the experts might not show up, so I give you my answer.
I try to get to the short answer as quick as possible, but I say a few things first (actually, a lot of things), to try and give my quick answer some support.
I don't think you're using the Isabelle vocabulary quite right ("inner syntax"), but I take two phrases of yours, with my bold emphasis added:
I am interested in using Isar as a meta language for writing formal proofs about J...
Where can I find documentation or example code for implementing a completely new inner syntax?
I'm not one to want to spend time clarifying, so here's what I take as your requirements, where I add a few details, from having listened to the experts, and figuring out a few things for myself, based on what they've said:
You want a logic which can be used to reason about programs you've written in J, where you use the minimal logic of Isabelle/Pure as your starting point (because you need the complete syntax of J, and want to start fresh).
You want to define syntax, using Isabelle/Isar, which implements (or models?) the complete syntax and functionality of J. (You didn't say that you only wanted to reason about a subset of the syntax and functionality of J.)
Unfortunately, my short answer is not completely set up.
To try to get you to realize what you're asking for, I now quote from the main J web page, where the emphasis is mine:
J is a modern, high-level, general-purpose, high-performance programming language.
I rephrase now general-purpose as full-blown, like C, like Pascal, like many high-level, general-purpose programming languages, and I remind you that you want two things:
A logic in Isabelle, which surely has to be comparable in sophistication, in features, and in power to the logic of Isabelle/HOL.
The syntax and use (or modeling?) of a full-blown programming language, J, in Isabelle, starting with Isabelle/Pure, where your implementation surely has to be
a little comparable in sophistication and power to the code generator of Isabelle/HOL, which can export code for 5 programming languages, SML, OCaml, Haskell, Scala, and Eval (Isabelle/ML),
and comparable in power to the logic engine of Isabelle/HOL, which implements (or models?) high-level, functional programming constructs such as definition, primrec, datatype, and fun, which let a person define functions and new datatypes, along with the standard library of Isabelle/HOL types, such as pairs, lists, etc.
Now, what I claim, as my personal conclusion, is that what you want to implement is at least as difficult to implement as Isabelle/HOL, which is the result of a large number of people, done over many years.
Please consider what Peter Lammich had to say on the Isabelle user's list in I need a fixed mutable array:
HOL itself does not support mutable arrays.
However, there is Imperative_HOL, which has a heap monad supporting
mutable arrays.
Then there is afp/Collections/Lib/Diff_Array, which provides an
implementation of arrays that behaves purely functional, but is
efficient if only the last version is accessed.
However, if you are not after efficient executability, but only
looking for an abstract model of a memory, it makes no sense using the
above types, as the efficiency comes at the price of additional
formalization overhead.
My point from the quote is that Isabelle/HOL, though powerful enough to be one of the leading competitors as a proof assistant, doesn't implement standard arrays in the main part of its logic, which you get when you import Complex_Main.
Let (L, P) be a pair, where L is the logic and P is the programming language. I want to talk about two pairs, (Isabelle/HOL, Haskell), and what you want, (x, J), where x is your yet determined logic.
There is a very close relationship between Isabelle/HOL and Haskell. For example, the type classes of Isabelle/HOL are advertised as Haskell-like type classes, and also, that Haskell is a pure functional programming language, and Isabelle/HOL is pure. I don't want to go further, because as a non-expert, I'm sure to say something that's not right.
The point I want to make is this:
Haskell is a full-blown programming language,
Isabelle/HOL is a powerful logic,
Haskell is one of the programming languages that can be exported from Isabelle/HOL,
but yet Isabelle/HOL doesn't implement (or model?) much of Haskell.
I don't want to talk as some authority. But from listening, my conclusion is: it's that logic thing. Apparently, it's much easier to implement programming languages than to develop logic to reason about programs.
The short answer is that, in my opinion, the example code that you're looking for is Isabelle/HOL, because though there are some examples in Isabelle2014/src of other logics, what I've quoted you as saying and wanting, and what I'm saying you're saying and wanting, is that you want and need a full blown logic, like Isabelle/HOL.
From here, I try to throw out a few quick ideas.
I like that car, but what I really want is liquid nitrogen for fuel
That's my joke.
You're talking to a senior engineer, who has worked in the industry for years, and has learned the expert knowledge that has accumulated in the automotive industry, over years and years, and you say, "I like that idea of a car, but my idea is to use a nitrogen fuel cell instead of gasoline. How would I do that?"
More logics in the Isabelle2014/src folder
The links under Theory libraries for Isabelle2014, on the distribution web page, match up with folders in the Isabelle2014/src folder.
In the src folder, you will see the folders CCL, Cube, CTT, and others.
I'm sure those are good for learning, though probably still difficult to understand, but those aren't what you've described. You're asking for a full blown implementation of something that models a programming language.
If the use of C/C++ is so big, then why isn't there something like you want for C/C++?
I guess there is, at least, sort of, for C. I found vcc.codeplex.com/. Again, I'm not an expert, so I don't want to be saying exactly what is out there, and what isn't.
My point here is that C and C++ have been around for a long time, and heavily used, and the link above shows that there are professionals which have, for a long time, been interested in verifying C programs, which makes a lot of sense.
But, after all these years, why isn't program verification an integral part of C/C++ programming?
From having listened to those here and there, and on the mailing list, and from listening to people like Martin Odersky, the Scala architect, they forever want to talk about mutable and immutable state, where traditional programming, like C, and I assume J, would be in the category of using mutable state, very much using it. Over time, I have heard a number of times that mutable state makes it difficult to reason about what a program does.
My point again is that it must be a lot easier to design programming languages, than to reason about programs.
Finally, a little source
If there had been some competition for this question, I might have been less verbose, though maybe not, though probably so, as in not even giving an answer.
My final point is a re-emphasis of points above. It pays to know a little history, and I start way before Church and Curry.
I know that Isabelle/HOL is the result of what started at Cambridge, with Robin Milner, the author of ML, then Mike Gordon of the HOL group, then Larry Paulson, the author of using Pure as minimal logic to define other logics, and then Tobias Nipkow teamed up with him to get HOL started as a logic in Isabelle, and then Makarius Wenzel put a higher-level syntax on it all, Isar (it's more than just syntactic sugar; it's fundamental to the feature of structured proofs), along with the PIDE frontend, and all along other people throughout the world have made numerous contributions, many from the big group at TUM, in Germany, but then there's CERN of Australia (update: CERN? that was no joke; I actually do know the difference between CERN and NICTA; the world, it's not an easy thing to talk about), and back to the European area, a certain Swiss establishment, ETH, and still more places spread around Germany and Austria, UIBK, and back over to England? Who did I leave out? Me, of course, and lots of others around the world.
The rambling point? It's that thing of you asking for something that embodies the expertise of an industry. It's not bad to ask for it. It's downright audacious, and I could be completely wrong in what I'm saying, and missed that folder in src, the HOWTO of Implementing Logic for General-Purpose Programming Languages, All in Ten Mostly Easy Steps, Send in Your $9.95 Now, or Euros if That's All You Got, You Do the Conversion, I Trust You, But Wait, There's More, Do a Change Directory to Isabelle2014/medicaldoctor and Learn How to Become a Brain Surgeon, Too.
That's another joke, I claim. Just a space filler, nothing much more.
Anyway, consider here lines 47 to 60 of HOL.thy:
setup {* Axclass.class_axiomatization (@{binding type}, []) *}
default_sort type
setup {* Object_Logic.add_base_sort @{sort type} *}
axiomatization where fun_arity: "OFCLASS('a ⇒ 'b, type_class)"
instance "fun" :: (type, type) type by (rule fun_arity)
axiomatization where itself_arity: "OFCLASS('a itself, type_class)"
instance itself :: (type) type by (rule itself_arity)
typedecl bool
judgment
Trueprop :: "bool => prop" ("(_)" 5)
Periodically, I've put in effort at understanding those few lines. For a long time, my starting point was typedecl bool, and I wasn't concerned with trying to understand what was before that, other than that HOL.thy imports Pure.
Recently, in trying to figure out types and sorts in Isabelle, from having listened to the experts, I finally saw that this line is where we get something like x::'a::type:
setup {* Object_Logic.add_base_sort @{sort type} *}
Another point? I'm back to what I said earlier. Because you want full-blown, your example is Isabelle/HOL, but yet just the first 57 lines of HOL.thy aren't easy to understand. But if you don't start with HOL, where are you going to look? Well, if what you find ends up being easy, there's a good chance it's partly because hundreds of people, over many years, didn't put their effort into the best way to start things out.
Or, it could have just been the 3 people listed as authors, Nipkow, Wenzel, and Paulson. In any case, there's still years of experience and education behind what's in there, even though HOL.thy is not that long, only 2019 lines. Of course, to understand what's in HOL.thy, you have to at least have a vague understanding of what Pure is.
Take a look at the src/Cube folder. It's one of the example logics that I mentioned above.
There are only two files, Cube.thy and Example.thy. It should be easy enough, but then that's the problem, it's too easy. It's not going to reflect the sophistication of Isabelle/HOL.
Your problems aren't my problem. Isabelle/HOL is good for reasoning about mathematics, like its ability to abstract operators with type classes. And it's good for more, like defining functions using functional programming, to be exported for SML, OCaml, Haskell, Scala, and Eval.
I'm just a beginner, that's all I am. If there's a better answer, then I hope it gets put forth by someone.
A few notes on the original question:
Outer syntax is the theory and proof language of Isar; to change it you define additional commands. You are subject to general types of theory content, like theory, local_theory, Proof.context, but these types are very flexible and can assimilate arbitrary ML data that is specific to your application.
Inner syntax is the type/term language of the logic, i.e. Pure for the framework and HOL for applications (or any other logic that you prefer, although HOL is so advanced today, that you should not ignore it without really good reasons). Ultimately you spell-out simple-typed lambda terms.
Both for outer and inner syntax you are subject to certain notions of tokens (identifiers, quoted strings etc.). Your language needs to conform to that, if it is meant to co-exist directly with the existing syntax framework.
It is nonetheless possible to embed totally different languages into the outer and inner syntax of Isabelle, by using quotations. E.g. see the document preparation language that is based on LaTeX and is delimited by funny {* ... *} markers for verbatim text. More basic quotations use " ... ", similar to ML string syntax. Inside the inner syntax, '' ... '' (double single quotes) do a similar job.
In Isabelle2014 there is a new syntactic device of text cartouches that makes this work a bit more smoothly. E.g. see the examples in Isabelle2014/src/HOL/ex/Cartouche_Examples.thy which explore a bit some possibilities.
Another current example from Isabelle2014 is the rail language inside Isabelle document source: it may serve as an almost stand-alone example of a "domain-specific formal language" defined from scratch. E.g. see Isabelle2014/src/Doc/Isar_Ref/Document_Preparation.thy and look at the various uses of @{rail ...} -- the implementation of that is in Isabelle2014/src/Pure/Tools/rail.ML -- a file of finite size to be studied carefully to learn more.

Do purely functional languages really guarantee immutability?

In a purely functional language, couldn't one still define an "assignment" operator, say "<-", such that the command, say "i <- 3", instead of directly assigning the immutable variable i, would create a copy of the entire current call stack, except replacing i with 3 in the new call stack, and execute the new call stack from that point onward? Given that no data actually changed, wouldn't that still be considered "purely functional" by definition? Of course, the compiler would simply make the optimization of simply assigning 3 to i, in which case what's the difference between imperative and purely functional?
Purely functional languages, such as Haskell, have ways of modelling imperative languages, and they are not shy about admitting it either. :)
See http://www.haskell.org/tutorial/io.html, in particular 7.5:
So, in the end, has Haskell simply re-invented the imperative wheel?
In some sense, yes. The I/O monad constitutes a small imperative sub-language inside Haskell, and thus the I/O component of a program may appear similar to ordinary imperative code. But there is one important difference: There is no special semantics that the user needs to deal with. In particular, equational reasoning in Haskell is not compromised. The imperative feel of the monadic code in a program does not detract from the functional aspect of Haskell. An experienced functional programmer should be able to minimize the imperative component of the program, only using the I/O monad for a minimal amount of top-level sequencing. The monad cleanly separates the functional and imperative program components. In contrast, imperative languages with functional subsets do not generally have any well-defined barrier between the purely functional and imperative worlds.
So the value of functional languages is not that they make state mutation impossible, but that they provide a way to allow you to keep the purely functional parts of your program separate from the state-mutating parts.
Of course, you can ignore this and write your entire program in the imperative style, but then you won't be taking advantage of the facilities of the language, so why use it?
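As a small illustration of that separation (my own sketch, not from the tutorial): the logic lives in a pure function, and only main touches the outside world.

import Data.Char (toUpper)

-- Pure part: no I/O, freely testable and replaceable by its result.
shout :: String -> String
shout s = map toUpper s ++ "!"

-- Imperative part: a thin layer of top-level sequencing in the IO monad.
main :: IO ()
main = do
  name <- getLine
  putStrLn (shout name)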
Update
Your idea is not as flawed as you assume. Firstly, if someone familiar only with imperative languages wanted to loop through a range of integers, they might wonder how this could be achieved without a way to increment a counter.
But of course instead you just write a function that acts as the body of the loop, and then make it call itself. Each invocation of the function corresponds to an "iteration step". And in the scope of each invocation the parameter has a different value, acting like an incrementing variable. Finally, the runtime can note that the recursive call appears at the end of the invocation, and so it can reuse the top of the function-call stack instead of growing it (tail call). Even this simple pattern has almost all of the flavour of your idea - including the compiler/runtime quietly stepping in and actually making mutation occur (overwriting the top of the stack). Not only is it logically equivalent to a loop with a mutating counter, but in fact it makes the CPU and memory do the same thing physically.
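For instance, a counting loop might look like this in Haskell (a minimal sketch of the pattern just described, not code from the question):

-- Each recursive call gets fresh values for acc and i; nothing is mutated
-- in the source language, even though the compiler may turn the tail call
-- into a jump that reuses the same stack frame.
sumTo :: Int -> Int
sumTo n = go 0 0
  where
    go acc i
      | i > n     = acc
      | otherwise = go (acc + i) (i + 1)   -- the "next iteration"

main :: IO ()
main = print (sumTo 10)   -- 55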
You mention a GetStack that would return the current stack as a data structure. That would indeed be a violation of functional purity, given that it would necessarily return something different each time it was called (with no arguments). But how about a function CallWithStack, to which you pass a function of your own, and it calls back to your function and passes it the current stack as a parameter? That would be perfectly okay. CallCC works a bit like that.
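A rough sketch of the callCC flavour, using Control.Monad.Cont from the mtl package (the validate example and its names are mine, purely for illustration):

import Control.Monad.Cont (callCC, runCont)

-- callCC hands your function the current continuation as an argument,
-- much as the hypothetical CallWithStack would hand you "the rest of the
-- computation"; invoking it jumps out early.
validate :: Int -> String
validate n = runCont (callCC check) id
  where
    check exit = do
      _ <- if n < 0 then exit "negative input" else return ""
      return ("ok: " ++ show n)

main :: IO ()
main = mapM_ (putStrLn . validate) [3, -1]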
Haskell doesn't readily give you ways to introspect or "execute" call stacks, so I wouldn't worry too much about that particular bizarre scheme. However in general it is true that one can subvert the type system using unsafe "functions" such as unsafePerformIO :: IO a -> a. The idea is to make it difficult, not impossible, to violate purity.
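For completeness, here is the standard kind of counterexample people construct with unsafePerformIO (a deliberately bad sketch, not something to imitate):

import Data.IORef (IORef, newIORef, readIORef, writeIORef)
import System.IO.Unsafe (unsafePerformIO)

-- A top-level mutable cell smuggled past the type system.
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

-- Looks like a pure function, but its result depends on hidden state,
-- so equational reasoning about it breaks down.
next :: () -> Int
next () = unsafePerformIO $ do
  n <- readIORef counter
  writeIORef counter (n + 1)
  return n

main :: IO ()
main = print (next (), next ())   -- result depends on evaluation order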
Indeed, in many situations, such as when making Haskell bindings for a C library, these mechanisms are quite necessary... by using them you are removing the burden of proof of purity from the compiler and taking it upon yourself.
There is a proposal to actually guarantee safety by outlawing such subversions of the type system; I'm not too familiar with it, but you can read about it here.
Immutability is a property of the language, not of the implementation.
An operation a <- expr that copies data is still an imperative operation, if values that refer to the location a appear to have changed from the programmer's point of view.
Likewise, a purely functional language implementation may overwrite and reuse variables to its heart's content, as long as each modification is invisible to the programmer. For example, the map function can in principle overwrite a list instead of creating a new one, whenever the language implementation can deduce that the old list won't be needed anywhere.
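The observable contract such an optimisation must preserve is simply this (a trivial sketch of my own):

main :: IO ()
main = do
  let xs = [1, 2, 3 :: Int]
      ys = map (+ 1) xs
  print xs   -- [1,2,3]: the old list is still there, apparently untouched
  print ys   -- [2,3,4]

-- Only if the implementation can prove xs is never used again may it
-- reuse its storage for ys; the program cannot tell the difference.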

What are the alternative of monads to use IO in pure functional programming?

Monads are described as the Haskell solution for dealing with IO. I was wondering if there are other ways to deal with IO in a pure functional language.
What alternatives are there to monads for I/O in a pure functional language?
I'm aware of two alternatives in the literature:
One is a so-called linear type system. The idea is that a value of linear type must be used exactly one time: you can't ignore it, and you can't use it twice. With this idea in mind, you give the state of the world an abstract type (e.g., World), and you make it linear. If I mark linear types with a star, then here are the types of some I/O operations:
getChar :: World* -> (Char, World*)
putChar :: Char -> World* -> World*
and so on. The compiler arranges to make sure you never copy the world, and then it can arrange to compile code that updates the world in place, which is safe because there is only one copy.
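In plain Haskell the threading looks something like this (a sketch only: World, getCharW and putCharW are made-up stand-ins, and unlike Clean or a real linear type system, nothing here stops you from using a World twice):

newtype World = World ()

getCharW :: World -> (Char, World)
getCharW (World u) = ('x', World u)   -- stand-in; a real one would read input

putCharW :: Char -> World -> World
putCharW _ w = w                      -- stand-in; a real one would write output

-- Each step consumes one world and yields the next, so the data flow
-- fixes the order of effects.
echo :: World -> World
echo w0 =
  let (c, w1) = getCharW w0
  in  putCharW c w1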
The uniqueness typing in the language Clean is based on linearity.
This system has a couple of advantages; in particular, it doesn't enforce the total ordering on events that monads do. It also tends to avoid the "IO sin bin" you see in Haskell where all effectful computations are tossed into the IO monad and they all get totally ordered whether you want total order or not.
The other system I'm aware of predates monads and Clean and is based on the idea that an interactive program is a function from a (possibly infinite) sequence of requests to a (possibly infinite) sequence of responses. This system, which was called "dialogs", was pure hell to program. Nobody misses it, and it had nothing in particular to recommend it. Its faults are enumerated nicely in the paper that introduced monadic I/O (Imperative Functional Programming) by Wadler and Peyton Jones. This paper also mentions an I/O system based on continuations which was introduced by the Yale Haskell group but which was short-lived.
Besides linear types, there are also effect systems.
If by "pure" you mean "referentially transparent", that is, that an applied function is freely interchangeable with its evaluated result (and therefore that calling a function with the same arguments has the same result every time), any concept of stateful IO is pretty much excluded by definition.
There are two rough strategies that I'm aware of:
Let a function do IO, but make sure that it can never be called twice with the exact same arguments; this side-steps the issue by letting the functions be trivially "referentially transparent".
Treat the entire program as a single pure function taking "all input received" as an argument and returning "all output produced", with both represented by some form of lazy stream to allow interactivity.
There are a variety of ways to implement both approaches, as well as some degree of overlap--e.g., in the second case, functions operating on the I/O streams are unlikely to be called twice with the same part of the stream. Which way of looking at it makes more sense depends on what kind of support the language gives you.
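Haskell's interact is almost a literal rendering of the second strategy: the whole program is a function from the lazy input stream to the output stream (a tiny example of mine, not from the answer above):

-- Reverse every line of input; laziness lets it process input
-- incrementally rather than requiring all of it up front.
main :: IO ()
main = interact (unlines . map reverse . lines)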
In Haskell, IO is a type of monad that automatically threads sequential state through the code (similar to the functionally pure State monad), such that, conceptually, each call to an otherwise impure function gets a different value of the implicit "state of the outside world".
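Conceptually (this is a teaching sketch, not GHC's actual definition of IO), you can picture it as a state-passing function:

-- A hypothetical world token; the real one is abstract and never inspected.
newtype RealWorld = RealWorld ()

newtype IO' a = IO' (RealWorld -> (a, RealWorld))

-- Sequencing threads the world through, so every "impure" call sees a
-- different world value.
bindIO' :: IO' a -> (a -> IO' b) -> IO' b
bindIO' (IO' m) k = IO' $ \w0 ->
  let (x, w1) = m w0
      IO' n   = k x
  in  n w1

returnIO' :: a -> IO' a
returnIO' x = IO' (\w -> (x, w))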
The other popular approach I'm aware of uses something like linear types to a similar end: ensuring that impure functions never get the same arguments twice by having values that can't be copied or duplicated, so that old values of the "state of the outside world" can't be kept around and reused.
Uniqueness typing is used in Clean.
Imperative Functional Programming by Peyton Jones and Wadler is a must-read if you are interested in functional IO. The other approaches that they discuss are:
Dialogues, which are lazy streams of responses and requests (a rough sketch follows this list)
type Dialogue = [Response] -> [Request]
main :: Dialogue
Continuations - each IO operation takes a continuation as argument
Linear types - the type system restricts you in a way that you cannot copy or destroy the outside state, which means that you can't call a function twice with the same state.
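A rough sketch of the dialogue style mentioned above (the Request/Response constructors below are simplified stand-ins for the richer historical Haskell 1.2 types):

data Request  = Getq | Putq Char
data Response = Getp Char | Putp

type Dialogue = [Response] -> [Request]

-- Echo one character: emit a read request first, then, once the matching
-- response arrives in the lazy response stream, emit a write request.
echo :: Dialogue
echo resps = Getq : case resps of
  (Getp c : _) -> [Putq c]
  _            -> []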
Functional Reactive Programming is another way to handle this.
I was wondering if there were other ways to deal with IO in a pure functional language.
Just adding to the other answers already here:
The title of this paper says it all :-)
You could also look at:
Rebelsky S.A. (1992) I/O trees and interactive lazy functional programming. In: Bruynooghe M., Wirsing M. (eds) Programming Language Implementation and Logic Programming. PLILP 1992. Lecture Notes in Computer Science, vol 631. Springer, Berlin, Heidelberg
When Haskell was young, Lennart Augustsson wrote of using system tokens as the mechanism for I/O:
L. Augustsson. Functional I/O Using System Tokens. PMG Memo 72, Dept Computer Science, Chalmers University of Technology, S-412 96 Göteborg, 1989.
I've yet to find an online copy, but I have no pressing need for it; if you do, I suggest contacting the library at Chalmers.

Mathematical notation of programming concepts

There are many methods for representing the structure of a program (like UML class diagrams etc.). I am interested in whether there is a convention that describes programs in a strict, mathematical way. I am especially interested in the use of mathematical notation for this purpose.
An example: classes are represented as sets (fields, properties) and functions (operating on the elements of those sets). A parent class's fields are a subset of a child class's. Functions are described in pseudocode which has to look like this and that...
I know that Z Notation has been used to some extent in the formal verification of software, such as the Tokeneer project.
Z Notation
Z Reference Manual
http://www.amazon.com/Concrete-Mathematics-Foundation-Computer-Science/dp/0201558025
Yes, there is, Floyd-Hoare Logic.
There are a lot of ways, but I think most of them are inconvenient for expressing structure, since the structure is often not expressible in standard mathematical concepts. The main exception is of course functional programming languages. Think about folds (catamorphisms), groups, algebras, etc.
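For example (a small Haskell sketch of my own): foldr is the list catamorphism, so the "structure" of many list programs is just the algebra you hand it.

-- The algebra is a combining function plus a base value.
sumList :: [Int] -> Int
sumList = foldr (+) 0

lengthList :: [a] -> Int
lengthList = foldr (\_ n -> 1 + n) 0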
For imperative programming I know of the existence of Z, which uses (pure and extended) lambda calculus, set theory and (first-order) predicate logic. However, I don't think it's very convenient. The only upside of using mathematics to express structure is the fact that you can prove stuff about it. But if you want to do that, take a look at JML, Spec# or Eiffel.
Depends on what you're trying to accomplish, but going down this road with specific languages can get you into trouble.
For example, see the circle-ellipse discussion on C++ FAQ Lite.
This book applies the deductive method to programming by affiliating programs with the abstract mathematical theories that enable them to work. [...]
I believe that Elements of Programming by Alexander Stepanov and Paul McJones is pretty close to what you are looking for.
Concepts
A concept is a description of requirements on one or more types stated in terms of the existence and properties of procedures, type attributes, and type functions defined on the types.
Z, which has already been mentioned, is pretty much what you describe. There are some variants of it for object-oriented modelling, but I think you can get quite far with "standard Z's" schemas if you wish to model classes.
There's also Alloy, which is newer and inspired by Z. Its notation is perhaps a bit closer to object-orientation. It is also analysable, i.e. you can check whether the models you create fulfill certain conditions, but it cannot prove that properties hold, only attempt to refute them within a finite scope.
The article Dependable Software by Design is a nice introduction to Alloy and its ilk, along with a table of available similar tools.
You are looking for functional programming. There are several functional programming languages, and they are all based on a fundamental mathematical theory called the Lambda calculus. Programs written in a functional programming language such as LISP are a mathematical representation of themselves. ;-)
There is a mathematical language which actually describes a program, or rather its operations. You take the initial state and then transform this state until you reach the desired target state. The transformations yield the program code which must be executed.
See the Wikipedia article about Hoare logic.
The basic idea is that for every function (no matter if you put that into a class or into an old-style function), you have a pre- and a post-condition. For example, the precondition can be that you have an array which has >= 0 elements. The post-condition is that every element[i] must be <= element[j] for every i <= j.
The usual description would be "the function sorts the array". But the mathematical terms allow you to transform the input (which must match the precondition) into the output (which must match the postcondition).
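Written as a Hoare triple, that sorting specification looks roughly like this (my own rendering, in plain notation):

{ length(A) = N and N >= 0 }  sort(A)  { for all i, j: 0 <= i <= j < N implies A[i] <= A[j] }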
It's a bit unwieldy to use, especially for more complex programs, but some of the examples are pretty impressive. Often, you get really compact code as the result which looks quite complex but works on the first try.
I'd like to suggest Algebra of Programming. It's a calculational approach to programs, using Relational Algebra and Galois Connections.
If you have further interest in this topic, you can find an amazing paper here, by Shin-Cheng Mu and José Nuno Oliveira (slides).
Using Relational Algebra and First-Order Logic also has a nice synergy with Alloy, Functional Programming, and Design by Contract (easily applied to Object-Oriented Programming).

Resources