Now that Chez Scheme is open-source, I wonder how it compares to Racket and other Schemes or languages in terms of performance, so that one could make informed choices about using them in one's projects.
Unfortunately, I couldn't find any relevant benchmarks.
I found the following:
https://ecraven.github.io/r7rs-benchmarks/benchmark.html
Problem: no Racket, or other languages (Update 10/13/18: Chez is now included in some of the benchmarks)
http://www.larcenists.org/benchmarksGenuineR6Linux.html
Problem: no Chez Scheme, or other languages
https://benchmarksgame-team.pages.debian.net/benchmarksgame/
Problem: only Racket, and the comparisons are questionable (for example, Python is not allowed to use NumPy where it would clearly help, while Racket makes FFI calls to GMP)
So, none of the benchmarks I found allow you to compare Racket to Chez, for example, or Chez to SBCL, or Java. Are there Chez benchmarks that give you a sense of how fast it is?
Chez Scheme is often said to be the fastest Scheme/Lisp around. We should know if it's faster than, say, Java for your typical business logic application.
Kent Dybvig has written articles on the implementation of Chez Scheme.
They'll often have comparisons with other implementations:
https://www.cs.indiana.edu/~dyb/
It's anecdotal, but Matthew Flatt, the lead developer of Racket, thinks Chez is pretty good. You can read more about it here. He cites a regular expression matcher in which Chez is twice as fast as Racket and comparable to C.
Emacs Lisp is a dialect of Lisp, close to Scheme in particular. Most Scheme interpreters optimize tail recursion, but Emacs Lisp doesn't. I searched for the reason in `info elisp' for a while, but failed to find it.
P.S. Yes, there are other iteration constructs in Emacs Lisp, like `while', but I still can't find a good reason why they didn't implement tail-recursion optimization like other Scheme interpreters do.
Emacs Lisp was created in the 1980's. The Lisp dialect that the Emacs author (Richard Stallman) was most familiar with at the time was MIT Maclisp, and Emacs Lisp is very similar to it. It used dynamic variable scoping and did not have lexical closures or tail recursion optimization, and Emacs Lisp is the same.
Emacs Lisp is very different from Scheme. The biggest difference is that Scheme is lexically scoped, but this was only recently added to Emacs Lisp, and it has to be enabled on a per-file basis (by putting ;;; -*- lexical-binding: t -*- on the first line) because doing it by default would cause many incompatibilities.
There has been some work to replace Emacs Lisp with Guile, a Scheme dialect. But it's not clear whether it will ever reach fruition.
Emacs Lisp had dynamic scoping as its main/only scoping rule for the first 25 years of its life. Dynamic scoping is basically incompatible with tail-recursion optimization: a function's dynamic bindings have to be undone when it returns, so even a call in tail position leaves work behind on the stack. Hence, until Emacs 24 (which introduced lexical scoping) there was little to no interest in this optimization.
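To see why, consider this small sketch (hypothetical function names; `depth' is a special, i.e. dynamically scoped, variable):

(defvar depth 0)   ; special (dynamically scoped) variable

(defun countdown (n)
  (let ((depth (1+ depth)))  ; dynamically rebinds `depth'
    (if (zerop n)
        depth
      (countdown (1- n)))))  ; looks like a tail call, but the old value of
                             ; `depth' must be restored after it returns,
                             ; so the frame cannot simply be discarded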
Nowadays, ELisp could benefit sometimes from optimization of tail recursion, and there's been some patches submitted to do that, but that hasn't yet been integrated. The lack of tail-recursion optimization as well as the relatively inefficient implementation of function calls has influenced ELisp style, such that recursion is not used very often, which in turns reduces the benefits of adding the optimization of tail calls.
Looks like someone has made an implementation of TCO in Emacs Lisp: https://github.com/Wilfred/tco.el. I haven't played with it myself, but you might want to give it a whirl if you're interested in seeing TCO in Emacs Lisp.
Emacs 28 introduced the macro named-let, which can be used to evaluate a tail-recursive loop expression in an optimized way.
While there's no direct support for auto-optimizing functions yet, we can use the above macro inside those functions; or if you are adventurous, set native-comp-speed to 3 (also Emacs 28+): Self TCO by GCCEmacs
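For example, a minimal sketch (assuming Emacs 28+); the self-call in tail position is compiled into a loop rather than nested stack frames:

(named-let sum ((i 100000) (acc 0))
  (if (zerop i)
      acc
    (sum (1- i) (+ acc i))))  ; => 5000050000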
Which FP language follows lambda calculus the closest in terms of its code looking, feeling, acting like lambda calculus abstractions?
This might not be a real answer; it's more of a guess about what you actually want.
In general, there's very little in the lambda calculus -- you basically need (first-class) functions, function applications, and variables. These days you'll have a hard time finding a language that does not provide you with these things... However, things can get confusing when you're trying to learn about it -- for example, it's very easy to just use plain numbers and then get them mixed up with Church numerals. (I've seen this happen with many students; adapting to the kind of formal thinking that you need for this material is hard enough that throwing encodings onto the pile doesn't really help...)
As Don said, Scheme is very close to the "plain" untyped lambda calculus, and it's probably very fitting in your case if you're going through The Little Schemer. If you really want to use a "proper" LC, you need to make sure that you use only functions (same problems as above); but there are some additional problems that you'll run into, especially when you read various other texts on the subject. First, most texts will use lazy evaluation, which you don't get in Scheme. Second, since LC has only unary functions, it's very common to shorten terms and use, for example, λxyz.zxy instead of the "real" form, which in this case is λx.(λy.(λz.((z x) y))) or (lambda (x) (lambda (y) (lambda (z) ((z x) y)))) in Scheme. (This is called currying.)
So yes, Scheme is pretty close to LC, but that's not saying much with all of these issues. Haskell is arguably a better candidate since it's both lazy, and does that kind of currying for multiple arguments to functions. OTOH, you're dealing with a typed language which is a pretty big piece of baggage to bring into this game -- and you'll get in some serious mud if you try to do TLS-style examples...
If you do want to get everything (lazy, shorthands, untyped, close enough to Scheme), then Racket has another point to consider. At a high-level, it's very close to Scheme, but it goes much farther in that you can quickly slap up a language that is a restriction of the Racket language to just lambda expressions and function applications. With some more work, you can also get it to do currying and you can even make it lazy. That's not really an exercise that you should try doing yourself at this point -- but if it sounds like what you want, then I can point you to my course (look for "Schlac" in the class notes) where we use a language that is doing all of the above, and it's extremely restricted so you get nothing more than the basic LC constructs. (For example, 3 is an unbound identifier until you define it.) Note that this is not some interpreter -- it's compiled into Racket code which means that it runs fast enough that you can even write code that uses numbers. You can get the implementation for that language there too, and once you install that, you get this language if you start files with #lang pl schlac.
Lambda calculus is a very, very restricted programming model. You have only functions. No literals, no built in arithmetic operators, no data structures. Everything is encoded as functions. As such, most functional languages try to extend the lambda calculus in ways to make it more convenient for everyday programming.
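To make "everything is encoded as functions" concrete, here is a minimal Scheme sketch of Church numerals (the helper names are ours, for illustration); a numeral n is "apply f n times":

(define zero (lambda (f) (lambda (x) x)))
(define (succ n) (lambda (f) (lambda (x) (f ((n f) x)))))            ; apply f once more
(define (add m n) (lambda (f) (lambda (x) ((m f) ((n f) x)))))       ; n times, then m times

(define (church->int n) ((n (lambda (k) (+ k 1))) 0))                ; decode, for checking
(church->int (add (succ zero) (succ (succ zero))))                   ; => 3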
Haskell uses a modern extension of lambda calculus as its core language: System F, extended with data types. (GHC has since extended this further to System Fc, supporting type equality coercions).
As all Haskell can be written directly in its core language, and its core language is an extension of typed lambda calculus (specifically, second-order lambda calculus), it could be said that Haskell follows lambda calculus closely, modulo its built-in operators for concurrency, parallelism, and memory side effects (and the FFI). This makes development of new compiler optimizations significantly easier, and also makes the semantics of a given program more tractable to understand.
On the other hand, Scheme is a variant of the untyped lambda calculus, extended with side effects and other non-lambda calculus concepts (such as concurrency primitives). It can be said to closely follow the untyped lambda calculus.
The only people this matters to are people learning the lambda calculus and compiler writers.
Can anybody give a clear explanation? What is wholemeal programming in the functional programming area? All I've found is that wholemeal means
focusing on entire data structures rather than their elements
but how can it be achieved?
(Code examples in such languages as Scala or OCaml are very desirable.)
"Functional languages excel at wholemeal programming, a term coined by
Geraint Jones. Wholemeal programming means to think big: work with an
entire list, rather than a sequence of elements; develop a solution
space, rather than an individual solution; imagine a graph, rather
than a single path. The wholemeal approach often offers new insights
or provides new perspectives on a given problem. It is nicely
complemented by the idea of projective programming: first solve a more
general problem, then extract the interesting bits and pieces by
transforming the general program into more specialised ones."
I also found this: it helps prevent a disease called "indexitis" and encourages lawful program construction (from "Pearls of Functional Algorithm Design", Richard Bird, 2010).
See also http://www.comlab.ox.ac.uk/ralf.hinze/publications/ICFP09.pdf.
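To make the contrast concrete, here is a rough OCaml sketch (the names are ours, not from the book): the first version suffers from "indexitis", the second treats the list as a whole with composable passes:

(* indexitis: walk the structure element by element, with explicit indices *)
let sum_squares_of_evens_indexed (a : int array) =
  let total = ref 0 in
  for i = 0 to Array.length a - 1 do
    if a.(i) mod 2 = 0 then total := !total + a.(i) * a.(i)
  done;
  !total

(* wholemeal: filter, map, and fold over the entire list at once *)
let sum_squares_of_evens xs =
  xs
  |> List.filter (fun x -> x mod 2 = 0)
  |> List.map (fun x -> x * x)
  |> List.fold_left ( + ) 0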
I always found the Hutton/Bird Sudoku solver a good example of wholemeal programming: http://www.cs.nott.ac.uk/~gmh/sudoku.lhs
A fair number of functional pearls (both those in Bird's excellent book that Code Monkey cites and those available here: http://www.haskell.org/haskellwiki/Research_papers/Functional_pearls) will probably also be instructive.
I have a number of small algorithms that I would like to write up in a paper. They are relatively short and concise. However, instead of writing them in pseudo-code (à la Cormen or even Knuth), I would like to write an algebraic representation of them (more linear, and with better LaTeX rendering). However, I cannot find resources as to the best notation for this, if there is any: e.g. how do I represent a loop? An if? The addition of a tuple to a list?
Has any of you encountered this problem, and somehow solved it?
Thanks.
EDIT: Thanks, people. I think I did a poor job of phrasing the question. Here goes again, hoping I make it clearer: what is the common mathematical notation for loops and if-then clauses? For instance, I can use $acc \leftarrow acc \cup \langle i,i+1 \rangle$ to represent the "add" method of a list.
Don't do this. You are deviating from what people expect to see when they read a paper about algorithms. You should follow expected practices; your ideas are more likely to get the attention that they deserve. When in Rome, do as the Romans do.
Formatting code (or pseudocode as it may be) in a LaTeXed paper is very easy. See, for example, Formatting code in LaTeX.
I see if-expressions in mathematical notation fairly often. The usual thing for a loop is a recurrence relation, or equivalently, a function defined recursively.
Here's how the Ackermann function is defined on Wikipedia, for instance:
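$$A(m, n) = \begin{cases} n + 1 & \text{if } m = 0 \\ A(m - 1,\ 1) & \text{if } m > 0 \text{ and } n = 0 \\ A(m - 1,\ A(m, n - 1)) & \text{if } m > 0 \text{ and } n > 0 \end{cases}$$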
This definition is nice because it feels mathematical in flavor and yet you could clearly type it in almost exactly as written and have an implementation. It is not always possible to achieve that.
Other mathematical notations that correspond to loops include ∑-notation for summation and set-builder notation.
I hope this answers your question! But if your aim is to describe how something is done and have someone understand, I think it is probably a mistake to assume that mathematicians would prefer to see equations. I don't think they're interchangeable tools (despite Turing equivalence). If your algorithm involves mutable data structures, procedural code is probably going to be better than equations for explaining it.
I'd copy Knuth. Few know how to communicate better than him in a computer science setting.
A symbol for general loops does not exist; usually you will use the summation operator. "if" is represented using implications, and to "add a tuple to a list" you would use union.
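For instance (a toy example of our own): the accumulating loop "set $acc \leftarrow acc + f(i)$ for $i = 1, \dots, n$" is conventionally written as $acc = \sum_{i=1}^{n} f(i)$, and an if-then-else becomes a piecewise definition such as $g(x) = \begin{cases} 1 & \text{if } x > 0 \\ 0 & \text{otherwise} \end{cases}$.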
However, in general, a bit of verbosity is not necessarily a bad thing - sometimes, especially for complex algorithms, it is best to spell it out in plain English, using examples and diagrams. This is doubly true for non-coders.
Think about it: when you read a math text-book on Euclid's algorithm for GCD, or the sieve of Eratosthenes, how is it written? Usually, the algorithm itself is in prose, while the proof of the algorithm is where the mathematical symbols lie.
You might take a look at Haskell. Haskell formats well in latex, has a nice algebraic syntax, and you can even compile a latex file with Haskell in it, provided the code is wrapped in \begin{code} and \end{code}. See here: http://www.haskell.org/haskellwiki/Literate_programming. There are probably literate programming tools for other languages.
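A minimal sketch of such a file: the usual trick is to define a code environment that LaTeX typesets verbatim, while GHC compiles only what sits between \begin{code} and \end{code}:

\documentclass{article}
\newenvironment{code}{\verbatim}{\endverbatim}
\begin{document}
Euclid's algorithm, as runnable Haskell:
\begin{code}
gcd' :: Integer -> Integer -> Integer
gcd' a 0 = a
gcd' a b = gcd' b (a `mod` b)
\end{code}
\end{document}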
Lisp started out as a mathematical notation for a model of computation, so that lecturers would have a better tool than Turing machines. By accident, it turned out that it could be implemented in assembly - and thus Lisp, the programming language, was born.
But I don't think this is really what you are looking for, since the computing model that Lisp describes doesn't have loops: recursion is used instead. The syntax derives from algebra, where parentheses denote evaluate-this-and-substitute-the-result. Indeed, Lisp's model of computing is basically substitution - which is essentially what algebra is.
Indeed, most functional languages like Lisp, Haskell and Erlang are derived from mathematics. Haskell is actually a result of proving that lambda calculus can be used to implement type systems. So Haskell, like Lisp, was born out of pure mathematics. But again, the syntax is not what you would probably be used to.
You can certainly explain Lisp and Haskell syntax to mathematicians and they would treat it as a "game". Language constructs like loops, recursion and conditionals can be proven out of the rules of the game rather than blindly implemented like in other languages. This would lead you into the realm of combinatory logic, another branch of mathematics. Indeed, in combinatory logic, even the concept of numbers can be constructed out of the rules of the game rather than being a native part of the language (google Church Numerals).
So have a look at Lisp/Scheme, Erlang and Haskell if you want. Erlang especially has syntax close to what you want:
add(A, B) -> A + B.
But my recommendation is to write in C-like pseudocode. It's sort of the lowest common denominator of programming languages, with a clean syntax that is fairly easy to understand. And the function syntax even derives from functions in mathematics. Remember f(x)?
As a plus, mathematicians are used to writing C, statisticians are used to writing C (though generally they prefer R), physicists are used to writing C, programmers are used to at least looking at C (I know a few who've never touched C).
Actually, scratch that. You mention that your target audience is statisticians. Write in R.
Something like this website describes?
APL? The only problem is that few people can read it.
I've just started learning Common Lisp -- and am rapidly falling in love with it -- and I've just moved on to the type system. I seem to be developing a particular fondness for applicative programming.
As I understand it, in CL strings and lists are both sequences, but there don't seem to be any standard functions for mapping over a sequence, only lists. I can see why they would be supplied for lists, what with them being the fundamental datatype and all, but why was it not designed to work with sequences? As they are a more general type, it would seem more useful to target applicative functions at them rather than lists. Or am I completely misunderstandimatifying how it works?
Edit:
What I was feeling particularly confused about was the way that sequences -- the abstraction -- and lists -- an implementation -- seem to be muddled up in CL. The consensus seems to be that this is for historical reasons; Lisp has been around so long that you can pretty much map out the development of software engineering practices through its functions and macros. Which functions apply to sequences and which to lists seems arbitrary at first glance because CL has a mixture of pre-sequence-abstraction functions that operate only on lists, and functions that do the same thing in a more general way on sequences. As someone who is just learning CL at the moment, I think it would be useful if authors introduced sequences first as the cleaner abstraction, and then brought in lists as the most fundamental implementation of that abstraction. Lists would still be needed as syntax, of course, but by the time it is necessary to state this explicitly many readers would have worked it out by themselves, which would be quite an ego boost when starting out.
Why, there are a lot of functions working on sequences. Mapping over a sequence is done with MAP or MAP-INTO.
Look at the sequences section of the CLHS to find out more.
There is also a quick reference that is nicely organized.
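For example (standard CL, shown with REPL-style results): MAP works on any sequence and lets you name the result type, while MAP-INTO writes results into an existing sequence:

(map 'string #'char-upcase "hello")        ; => "HELLO"
(map 'vector #'+ #(1 2 3) '(10 20 30))     ; => #(11 22 33)
(map-into (vector 0 0 0) #'1+ '(1 2 3))    ; => #(2 3 4)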
Well, you are generally correct. Many functions do indeed work only on lists (mapcar, mapcan, append, assoc, etc.). For a few of these there are equivalent functions for sequences (concatenate, some and every come to mind), and sometimes the list-only version is simply outdated (e.g. nth for lists only vs. elt for all sequences). Some functions simply work on sequences (length, for example).
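For instance:

(nth 1 '(a b c))    ; => B      (lists only)
(elt '(a b c) 1)    ; => B
(elt "abc" 1)       ; => #\b
(elt #(a b c) 1)    ; => B      (any sequence)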
CL is a bit of a mess. It's a big language, as in huge. Over 700 functions, AFAIK. And it's old. Some of these functions are deprecated by convention, and others are rarely, if ever, used.
Yes, it would be more useful to have the mapping functions be methods that apply, as intended, to all sequences. CL was simply not built that way. If it were built again today, I'm sure this would be considered, and it would look very different.
That said, you are not left completely out in the cold. The loop macro works on sequences, as does iterate (a separate looping macro, which I happen to like more). This will get you far. For most practical purposes you will be using lists, and this won't be more than a pragmatic problem. If you do happen to lack a mapping function for vectors (or sequences in general), who's to stop you from writing it?
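A quick sketch of the loop route (standard LOOP keywords: in for lists, across for vectors):

(loop for x across #(1 2 3) collect (* x x))   ; => (1 4 9)
(loop for x in '(1 2 3) sum x)                 ; => 6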