Which FP language follows lambda calculus the closest in terms of its code looking, feeling, acting like lambda calculus abstractions?
This might not be a real answer; it's more of a guess about what you actually want.
In general, there's very little in the lambda calculus -- you basically need (first-class) functions, function applications, and variables. These days you'll have a hard time finding a language that does not provide you with these things... However, things can get confusing when you're trying to learn about it -- for example, it's very easy to just use plain numbers and then get them mixed up with Church numerals. (I've seen this happen with many students; adapting to the kind of formal thinking that you need for this material is hard enough that throwing encodings onto the pile doesn't really help...)
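To make that pitfall concrete, here is a minimal sketch of Church numerals in Haskell (zero, suc and toInt are illustrative names, not library functions):

-- A Church numeral n is a function that applies its first argument n times.
zero, one, two :: (a -> a) -> a -> a
zero _ x = x
one  f x = f x
two  f x = f (f x)

-- Successor: apply f one more time than n does.
suc :: ((a -> a) -> a -> a) -> (a -> a) -> a -> a
suc n f x = f (n f x)

-- Decode to an ordinary Int -- note that the numeral two and the number 2
-- are entirely different things, which is exactly where students trip up.
toInt :: ((Int -> Int) -> Int -> Int) -> Int
toInt n = n (+ 1) 0

main :: IO ()
main = print (toInt (suc two))  -- prints 3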
As Don said, Scheme is very close to the "plain" untyped lambda calculus, and it's probably very fitting in your case if you're going through The Little Schemer. If you really want to use a "proper" LC, you need to make sure that you use only functions (same problems as above); but there are some additional problems that you'll run into, especially when you read various other texts on the subject. First, most texts will use lazy evaluation, which you don't get in Scheme. Second, since LC has only unary functions, it's very common to shorten terms and use, for example, λxyz.zxy instead of the "real" form, which in this case is λx.(λy.(λz.((z x) y))) or (lambda (x) (lambda (y) (lambda (z) ((z x) y)))) in Scheme. (This is called currying.)
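Haskell's lambda syntax bakes this shorthand in; a small sketch (shorthand and expanded are made-up names for illustration):

-- λxyz.zxy, written with the multi-argument shorthand...
shorthand :: a -> b -> (a -> b -> c) -> c
shorthand = \x y z -> z x y

-- ...is just sugar for the fully curried nest of unary functions.
expanded :: a -> b -> (a -> b -> c) -> c
expanded = \x -> \y -> \z -> (z x) y

main :: IO ()
main = print (shorthand (1 :: Int) 2 (+) == expanded 1 2 (+))  -- True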
So yes, Scheme is pretty close to LC, but that's not saying much with all of these issues. Haskell is arguably a better candidate since it's both lazy, and does that kind of currying for multiple arguments to functions. OTOH, you're dealing with a typed language which is a pretty big piece of baggage to bring into this game -- and you'll get in some serious mud if you try to do TLS-style examples...
If you do want to get everything (lazy, shorthands, untyped, close enough to Scheme), then Racket has another point in its favor. At a high level, it's very close to Scheme, but it goes much further in that you can quickly put together a language that is a restriction of the Racket language to just lambda expressions and function applications. With some more work, you can also get it to do currying, and you can even make it lazy. That's not really an exercise that you should try doing yourself at this point -- but if it sounds like what you want, then I can point you to my course (look for "Schlac" in the class notes), where we use a language that does all of the above, and it's extremely restricted, so you get nothing more than the basic LC constructs. (For example, 3 is an unbound identifier until you define it.) Note that this is not some interpreter -- it's compiled into Racket code, which means that it runs fast enough that you can even write code that uses numbers. You can get the implementation for that language there too, and once you install that, you get this language if you start files with #lang pl schlac.
Lambda calculus is a very, very restricted programming model. You have only functions. No literals, no built in arithmetic operators, no data structures. Everything is encoded as functions. As such, most functional languages try to extend the lambda calculus in ways to make it more convenient for everyday programming.
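As a taste of that encoding style, here is a small sketch in Haskell of booleans built from nothing but functions (true, false and ifThenElse are illustrative names, not library functions):

-- A boolean is a function that chooses between its two arguments.
true, false :: a -> a -> a
true  x _ = x
false _ y = y

-- "if b then t else e" becomes plain function application.
ifThenElse :: (a -> a -> a) -> a -> a -> a
ifThenElse b t e = b t e

main :: IO ()
main = putStrLn (ifThenElse true "yes" "no")  -- prints yes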
Haskell uses a modern extension of lambda calculus as its core language: System F, extended with data types. (GHC has since extended this further to System FC, which supports type-equality coercions.)
As all Haskell can be written directly in its core language, and its core language is an extension of typed lambda calculus (specifically, second-order lambda calculus), it could be said that Haskell follows the lambda calculus closely, modulo its built-in operators for concurrency, parallelism, and memory side effects (and the FFI). This makes development of new compiler optimizations significantly easier, and also makes the semantics of a given program more tractable to understand.
On the other hand, Scheme is a variant of the untyped lambda calculus, extended with side effects and other non-lambda calculus concepts (such as concurrency primitives). It can be said to closely follow the untyped lambda calculus.
The only people this matters to are people learning the lambda calculus, and compiler writers.
In quite a few of my more recent programs, I've been using basic arithmetic to replace if statements. For example, in a for loop I've used pos = cbltSz*(x-1) to get the position of a small cube relative to a large one, rather than saying something like if(x == 0){pos = -cbltSz}. This is more or less to neaten up the code a little bit. But it got me thinking: to what extent would using maths outperform predefined statements/functions? And how much would it vary from language to language? This is assuming that the maths I use is preferable to the alternative in a way other than aesthetics.
Modern CPUs have deep pipelines, so a branch misprediction can carry a considerable performance cost. On the other hand, they tend to have considerable floating-point computation power, and can often perform multiple floating-point operations in parallel on different ALUs. So there might be a performance benefit to this approach. But it all depends. If the application is already doing a lot of number crunching, the ALUs might be saturated. If the branch predictor does a good job, branch mispredictions may be rare in many applications.
So I'd go with the usual rules for optimizations: don't try to hand-optimize everything. Don't optimize at the cost of readability and maintainability. If you have a performance bottleneck, identify the hot portions of your codebase and consider alternatives for those. Try out alternatives in benchmarks, because theoretic considerations only get you so far.
Note that your statements pos = cbltSz*(x-1) and if(x == 0){pos = -cbltSz} are not equivalent when x is non-zero: the first sets pos to some definite value, while the second leaves it at its previous value. The significance of this difference depends on the rest of your program. The two statements also differ in clarity -- the second expresses your purpose better, and the first does not "neaten up the code a little bit". In most programs, improved clarity is more important than a slight increase in speed.
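To see the difference, here is a small sketch in Haskell (keeping the asker's names cbltSz and x; the values are made up for illustration):

import Data.IORef

main :: IO ()
main = do
  let cbltSz = 10 :: Int
      x      = 3  :: Int
  pos <- newIORef 99                 -- pos holds some previous value
  writeIORef pos (cbltSz * (x - 1))  -- variant 1: always assigns
  a <- readIORef pos
  writeIORef pos 99                  -- restore the previous value
  if x == 0                          -- variant 2: assigns only when x == 0
    then writeIORef pos (negate cbltSz)
    else pure ()
  b <- readIORef pos
  print (a, b)                       -- (20, 99): the variants disagree when x /= 0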
So the answer to your question depends on too many factors to get a full answer here.
What most early programming language designers didn't understand, and many still don't understand, is that mathematical functions are a bit different from user-defined functions. If we've done any trig at all, we all know what sin(PI/8) is going to mean, and we're happy embedding the call in expressions like
rx = cos(theta) * x - sin(theta) * y;
But functions you write yourself are only seldom like basic mathematical functions. They take several parameters, they return several results, and it's not usually clear at a glance what they do. The last thing you want is to embed them in complicated expressions.
Secondly, maths has its own system of notation for a reason. The cut-down, ASCII-only system of a programming language breaks down as soon as expressions go above a certain low complexity. (I use the rule of three: three levels of nested parentheses are all your user can take in.) And, without special programming support, programming functions cannot escape their domain.
pow(sqrt(-1), sqrt(-1));
won't do what you want unless you have a complex math library installed.
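In a language whose numeric types include complex numbers, the same expression is well-defined; a quick Haskell sketch:

import Data.Complex

-- sqrt (-1) is i here, and i ** i is the real number e^(-pi/2).
main :: IO ()
main = print ((sqrt (-1) :: Complex Double) ** sqrt (-1))
-- prints approximately 0.2079 :+ 0.0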
As for performance, some Fortran and C compilers optimise mathematical routines very aggressively, with constant propagation and the like. Others won't. It just depends.
I am looking for an accurate and understandable definition. The ones I have found differ from each other:
From a book on functional reactive programming:

"Denotational semantics is a mathematical expression of the formal meaning of a programming language."
However, Wikipedia refers to it as an approach and not a mathematical expression:

"Denotational semantics is an approach of formalizing the meanings of programming languages by constructing mathematical objects (called denotations) that describe the meanings of expressions from the languages."
The term "denotational semantics" refers to both the mathematical meanings of programs and the approach of giving such meanings to programs. It is like, say, the word "history", which means the history of something as well as the entire research field on histories of things.
I've never found the definitions of the term "denotational semantics" useful for understanding the concept and its significance. Rather, I think it's best approached instead by considering the forms of reasoning that denotational semantics enables.
Specifically, denotational semantics enables equational reasoning with referentially transparent programs. Wikipedia gives this introductory definition of referential transparency:
An expression is said to be referentially transparent if it can be replaced with its value without changing the behavior of a program (in other words, yielding a program that has the same effects and output on the same input).
But a more precise definition wouldn't talk about replacing an expression with a "value", but rather replacing it with another expression. Then, referential transparency is the property that if you replace parts with replacements that have the same denotation, the resulting wholes also have the same denotation.
So IMHO, as a programmer, that's the key thing to understand: denotational semantics is about how to give mathematical "teeth" to the concept of referential transparency, so we can give principled answers to claims about correctness of substitution. In the context of functional programming, for example, one of the key applications is: when can we say that two function-valued expressions actually denote "the same" function, and thus either can safely substitute for the other? The classic denotational answer is extensional equality: two functions are equal if and only if they map the same inputs to the same outputs, so we just have to prove whether the expressions in question denote extensionally equivalent functions. So for example, Quicksort and Bubblesort are notably different algorithms, but denotationally they are the same function.
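To make that concrete, here is a small Haskell sketch (insertionSort is defined here just for illustration; a real argument would quantify over all inputs rather than spot-check one):

import Data.List (insert, sort)

-- Two syntactically different sorting programs.
insertionSort :: Ord a => [a] -> [a]
insertionSort = foldr insert []

-- Extensional equality: they agree on this input (and, provably, on all).
main :: IO ()
main = print (insertionSort [3, 1, 2] == sort ([3, 1, 2] :: [Int]))  -- True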
In the context of reactive programming, the big question would be: when can we say that two different expressions nevertheless denote the same event stream or time-dependent value?
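One classic instance, sketched in Haskell with lazy lists standing in for event streams (a simplification, not a full FRP model): by the functor law, map f . map g and map (f . g) are different expressions that denote the same stream transformer.

-- Two passes over the stream versus one fused pass.
xs, ys :: [Int]
xs = take 5 (map (+ 1) (map (* 2) [0 ..]))
ys = take 5 (map ((+ 1) . (* 2)) [0 ..])

main :: IO ()
main = print (xs == ys)  -- True: both are [1,3,5,7,9]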
Basically I have created two MATLAB functions which involve some basic signal processing and I need to describe how these functions work in a written report. It specifically requires me to describe the algorithms using mathematical notation.
Maths really isn't my strong point at all, in fact I'm quite surprised I've even been able to develop the functions in the first place. I'm quite worried about the situation at the moment, it's the last section of writing I need to complete but it is crucially important.
What I want to know is whether I'm going to have to grab a book and teach myself mathematical notation in a very short space of time or is there possibly an easier/quicker way to learn? (Yes I know reading a book should be simple enough, but maths + short time frame = major headache + stress)
I've searched through some threads on here already but I really don't know where to start!
Although your question is rather vague, and I have no idea what sorts of algorithms you have coded that you are trying to describe in equation form, here are a few pointers that may help:
Check the MATLAB documentation: If you are using built-in MATLAB functions, they will sometimes give an equation in the documentation that describes what they are doing internally. Some examples are the functions CONV, CORRCOEF, and FFT. If the function is rather complicated, it may not have an equation but instead have links to some papers describing the algorithm, which may themselves have equations for the algorithm. An example is the function HILBERT (which you can also find equations for on Wikipedia).
Find some lists of common mathematical symbols: Some standard symbols used to represent common mathematical operations can be found here.
Look at some sample pseudocode to see how it's done: For algorithms you yourself have coded up, you'll have to write them out in equation or pseudocode form. A paper that I've used often in my work is Templates for the Solution of Linear Systems, and it has some examples of pseudocode that may be helpful to you. I would suggest first looking at the list of symbols used in that paper (on page iv) to see some typical notations used to represent various mathematical operations. You can then look at some of the examples of pseudocode throughout the rest of the document, such as in the box on page 8.
I suggest that you learn a little bit of LaTeX and investigate Matlab's publish feature. You only need to learn enough LaTeX to write mathematical expressions. Then you have to write Matlab comments in your source file in LaTeX, but only for the bits you want to look like high-quality maths. Finally, open the Matlab editor on your .m file, and select File | Publish.
See Very Quick Intro to LaTeX and check your Matlab documentation for publish.
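For instance, here is the kind of expression you might embed in your comments -- this is the textbook discrete convolution, which is the operation behind MATLAB's CONV (standard notation, not specific to any template):

\[
y[n] = \sum_{k=-\infty}^{\infty} x[k] \, h[n - k]
\]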
In addition to the answers already here, I would strongly advise using words in addition to formulae in your report to describe the maths that you are presenting.

If I were marking a student's report and they explained the concepts of what they were doing correctly, but had poor or incorrect mathematical notation to back it up, this would lose them some marks but would hopefully not impede my understanding of the hard work they've put in.

If they had poor/wrong maths, with no explanation of what they meant to say, this could jeopardise my understanding of their entire project and cost them a passing grade.
The reason you haven't found any useful threads is that most of the time, people are trying to turn maths into algorithms, not vice versa!
Starting from an arbitrary algorithm, sometimes pseudo-code, along with suitable comments, is the clearest (and possibly only) representation.
I have a number of small algorithms that I would like to write up in a paper. They are relatively short and concise. However, instead of writing them in pseudo-code (à la Cormen or even Knuth), I would like to write an algebraic representation of them (more linear, with better LaTeX rendering). However, I cannot find resources on the best notation for this, if one exists: e.g., how do I represent a loop? An if? The addition of a tuple to a list?
Has any of you encountered this problem, and somehow solved it?
Thanks.
EDIT: Thanks, people. I think I did a poor job at phrasing the question. Here goes again, hoping I make it clearer: what is the common notation for talking about loops and if-then clauses in a mathematical notation? For instance, I can use $acc \leftarrow acc \cup \langle i,i+1 \rangle$ to represent the "add" method of a list.
Don't do this. You are deviating from what people expect to see when they read a paper about algorithms. You should follow expected practices; your ideas are more likely to get the attention that they deserve. When in Rome, do as the Romans do.
Formatting code (or pseudocode as it may be) in a LaTeXed paper is very easy. See, for example, Formatting code in LaTeX.
I see if-expressions in mathematical notation fairly often. The usual thing for a loop is a recurrence relation, or equivalently, a function defined recursively.
Here's how the Ackermann function is defined on Wikipedia, for instance:
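\[
A(m, n) =
\begin{cases}
n + 1 & \text{if } m = 0 \\
A(m - 1,\, 1) & \text{if } m > 0 \text{ and } n = 0 \\
A(m - 1,\, A(m, n - 1)) & \text{if } m > 0 \text{ and } n > 0
\end{cases}
\]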
This definition is nice because it feels mathematical in flavor, and yet you could clearly type it in almost exactly as written and have an implementation. It is not always possible to achieve that.
Other mathematical notations that correspond to loops include ∑-notation for summation and set-builder notation.
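To see the correspondence concretely, here is a small Haskell sketch (sumTo is an illustrative name): the loop "s = 0; for k = 1 to n: s = s + k" becomes the recurrence s(0) = 0, s(n) = s(n - 1) + n, which is exactly what Σ-notation abbreviates.

-- The recurrence, written as a recursive function.
sumTo :: Int -> Int
sumTo 0 = 0
sumTo n = sumTo (n - 1) + n

main :: IO ()
main = print (sumTo 10)  -- 55, i.e. the sum of 1..10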
I hope this answers your question! But if your aim is to describe how something is done and have someone understand, I think it is probably a mistake to assume that mathematicians would prefer to see equations. I don't think they're interchangeable tools (despite Turing equivalence). If your algorithm involves mutable data structures, procedural code is probably going to be better than equations for explaining it.
I'd copy Knuth. Few know how to communicate better than him in a computer science setting.
A symbol for general loops does not exist; usually you will use the summation operator. "if" is represented using implications, and to "add a tuple to a list" you would use union.
However, in general, a bit of verbosity is not necessarily a bad thing -- sometimes, especially for complex algorithms, it is best to spell it out in plain English, using examples and diagrams. This is doubly true for non-coders.
Think about it: when you read a math text-book on Euclid's algorithm for GCD, or the sieve of Eratosthenes, how is it written? Usually, the algorithm itself is in prose, while the proof of the algorithm is where the mathematical symbols lie.
You might take a look at Haskell. Haskell formats well in LaTeX, has a nice algebraic syntax, and you can even compile a LaTeX file with Haskell in it, provided the code is wrapped in \begin{code} and \end{code}. See here: http://www.haskell.org/haskellwiki/Literate_programming. There are probably literate programming tools for other languages.
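For instance, here is a minimal sketch of such a literate file (GHC compiles .lhs files written in this LaTeX style directly; to typeset it you would also need to define or import a code environment, and gcdEuclid is an illustrative name, not a library function):

\documentclass{article}
\begin{document}
Euclid's algorithm for the greatest common divisor, as a recurrence:
\begin{code}
gcdEuclid :: Int -> Int -> Int
gcdEuclid a 0 = a
gcdEuclid a b = gcdEuclid b (a `mod` b)

main :: IO ()
main = print (gcdEuclid 12 18)  -- prints 6
\end{code}
\end{document}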
Lisp started out as a mathematical notation for a computing model, so that the lecturer would have a better tool than Turing machines. By accident, it turned out that it could be implemented in assembly -- and thus Lisp, the programming language, was born.

But I don't think this is really what you are looking for, since the computing model that Lisp describes doesn't have loops: recursion is used instead. The syntax derives from algebra, where parentheses denote evaluate-this-and-substitute-the-result. Indeed, Lisp's model of computing is basically substitution -- which is essentially what algebra is.

Indeed, most functional languages, like Lisp, Haskell and Erlang, are derived from mathematics. Haskell is actually a result of proving that lambda calculus can be used to implement type systems. So Haskell, like Lisp, was born out of pure mathematics. But again, the syntax is not what you would probably be used to.

You can certainly explain Lisp and Haskell syntax to mathematicians, and they would treat it as a "game". Language constructs like loops, recursion and conditionals can be proven out of the rules of the game rather than blindly implemented as in other languages. This would lead you into the realm of combinatory logic, another branch of mathematics. Indeed, in combinatory logic, even the concept of numbers can be constructed out of the rules of the game rather than being a native part of the language (google Church numerals).
So have a look at Lisp/Scheme, Erlang and Haskell if you want. Erlang especially has syntax close to what you want:
add(A, B) -> A + B.
But my recommendation is to write in C-like pseudocode. It's sort of the lowest common denominator in programming languages. Has a syntax that is fairly easy to understand and clean. And the function syntax even derives from functions in mathematics. Remember f(x)?
As a plus, mathematicians are used to writing C, statisticians are used to writing C (though generally they prefer R), physicists are used to writing C, programmers are used to at least looking at C (I know a few who've never touched C).
Actually, scratch that. You mention that your target audience is statisticians. Write in R.
Something like this website describes?
APL? The only problem is that few people can read it.
I've just started learning Common Lisp -- and rapidly falling in love with it -- and I've just moved on to the type system. I seem to be developing a particular fondness for applicative programming.
As I understand it, in CL strings and lists are both sequences, but there don't seem to be any standard functions for mapping over a sequence, only lists. I can see why they would be supplied for lists, what with them being the fundamental datatype and all, but why was it not designed to work with sequences? As they are a more general type, it would seem more useful to target applicative functions at them rather than lists. Or am I completely misunderstandimatifying how it works?
Edit:
What I was feeling particularly confused about was the way that sequences -- the abstraction -- and lists -- an implementation -- seem to be muddled up in CL. The consensus seems to be that this is for historical reasons; Lisp has been around so long that you can pretty much map out the development of software engineering practices through its functions and macros. Which functions apply to sequences and which to lists seems arbitrary at first glance, because CL has a mixture of pre-sequence-abstraction functions that operate only on lists, and functions that do the same thing in a more general way on sequences. As someone who is just learning CL at the moment, I think it would be useful if authors introduced sequences first as the cleaner abstraction, and then brought in lists as the most fundamental implementation of that abstraction. Lists would still be needed as syntax of course, but by the time it is necessary to state this explicitly many readers would have worked this out by themselves, which would be quite an ego boost when starting out.
Why, there are a lot of functions working on sequences. Mapping over a sequence is done with MAP or MAP-INTO.
Look at the sequences section of the CLHS to find out more.
There is also a quick reference that is nicely organized.
Well, you are generally correct. Most functions do indeed focus on lists (mapcar, find, count, remove, append, etc.). For a few of these there are equivalent functions for sequences (concatenate, some and every come to mind), and for some the list equivalent is outdated (e.g. nth for lists only vs. elt for all sequences). Some functions simply work on sequences (length, for example).
CL is a bit of a mess. It's a big language, as in huge. Over 700 functions, AFAIK. And it's old. Some of these functions are deprecated by convention, and others are rarely, if ever, used.
Yes, it would be more useful to have mapping functions be methods that apply, as intended, to all sequences. CL was simply not built that way. If it were built again today, I'm sure this would be considered, and it would look very different.
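For contrast, here is a sketch of that design in Haskell, where mapping is a method of an abstraction (the Functor class) rather than a list-specific function:

main :: IO ()
main = do
  print (fmap (* 2) [1, 2, 3])  -- the one mapping operation works on lists...
  print (fmap (* 2) (Just 4))   -- ...and on other containers, uniformly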
That said, you are not left completely in the cold. The loop macro works on sequences, as does iterate (a separate looping macro, which I happen to like more). This will get you far. For most practical purposes you will be using lists, and this won't be more than a pragmatic problem. If you do happen to lack a mapping function for vectors (or sequences in general), who's to stop you from writing it?