Does SBCL consider whether a function has side effects when optimizing? - common-lisp

I have recently been reading the SBCL User Manual and started wondering about the title question.
Some Lisps, Clojure for example, strongly discourage side effects so that code can be parallelized more easily. Common Lisp allows side effects, so I was wondering whether the fact that a given function is 'dirty' or 'clean' affects its compilation.
For example, the let optimizations section of the CMUCL compiler manual shows how, in many cases, using let to bind a new variable is more efficient than modifying an existing one with setq. I guess I'm asking whether something similar is done for function calls.
I have read the relevant sections of the SBCL manual and pored over the questions on Stack Overflow, but could not find an answer to this.
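For concreteness, here is roughly the kind of rewrite that manual section describes, as a minimal sketch (compute-a-fixnum and use are hypothetical placeholders):

(let ((x (compute-a-fixnum)))
  (let ((x (1+ x)))      ; fresh binding: the compiler can track x's type precisely
    (use x)))

(let ((x (compute-a-fixnum)))
  (setq x (1+ x))        ; mutation: one variable must cover both values,
  (use x))               ; so type inference is weaker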

short:
Not faster. Sometimes actually slower.
long:
According to Stas Boukarev from SBCL-devel,
SBCL doesn't even know that a function has no side effects, so, no. Besides, most of the time having side effects is the most optimal way.
I am aware that functions such as nreverse, which are destructive, tend to be faster than their nondestructive counterparts (in this case, reverse is the nondestructive version). They also come with many setbacks. As Peter Seibel put it:
Each recycling function is a loaded gun pointed footward.
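As a minimal illustration of that footgun:

(defparameter *xs* (list 1 2 3))

(reverse *xs*)     ; => (3 2 1), and *xs* is still (1 2 3)

(nreverse *xs*)    ; => (3 2 1), but *xs* is now unspecified:
                   ; the list structure may have been reused destructively,
                   ; so *xs* must be treated as garbage after this call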


Side Effects in Functional Programming

In a functional programming book, the author mentions that the following are side effects:
Modifying a variable
Modifying a data structure in place
Setting a field on an object
Throwing an exception or halting with an error
Printing to the console or reading user input
Reading from or writing to a file
Drawing on the screen
I am just wondering how it is possible to write a pure functional program without reading from or writing to a file, if those are side effects. If it is possible, what would be the common approach in the functional world to achieve this?
Thanks,
Mohamed
Properly answering this question would likely require an entire book (though not a very long one). The point here is that functional programming is meant to separate the description/representation of logic from its actual runtime interpretation. Your functional code merely represents (doesn't run) the effects of your program as values, giving you back a kind of abstract syntax tree that describes your computation. A different part of your code (usually called the interpreter) takes those values and lazily runs the actual effects. That part is not functional.
How is it possible to write a pure functional program that is useful in any way? It is not possible. A pure functional program only heats up the CPU. It needs an impure part (the interpreter) that actually writes to disk or to the network. There are several important advantages in doing it that way. The pure functional part is easy to test (testing pure functions is easy), and the referentially transparent nature of pure functions makes it easy to reason about your code locally, making the development process as a whole less buggy and more productive. It also offers elegant ways to deal with traditionally obfuscated defensive code.
So what is the common approach in the functional world to achieve side effects? As said, representing them as values, and then writing the code that interprets those values. A really good explanation of the whole process can be found in this series of blog posts.
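To make the pattern concrete, here is a deliberately tiny Common Lisp sketch (all names are made up): the pure part only builds a list of effect descriptions, and the interpreter is the single impure place that performs them.

(defun program ()
  ;; Pure: returns data describing the effects, runs none of them.
  '((:print "What is your name?")
    (:read-line :name)
    (:print-template "Hello, ~a!" :name)))

(defun interpret (steps)
  ;; Impure: the only part that actually performs I/O.
  (let ((env (make-hash-table)))
    (dolist (step steps)
      (ecase (first step)
        (:print (format t "~a~%" (second step)))
        (:read-line (setf (gethash (second step) env) (read-line)))
        (:print-template
         (format t (second step) (gethash (third step) env))
         (terpri))))))

(interpret (program))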
For the sake of brevity, let me (over)simplify and make the long story short:
To deal with "side effects" in purely functional programming, you (programmers) write pure functions from the input to the output, and the system causes the side effects by applying those pure functions to the "real world".
For example, to read an integer x and write x+1, you (roughly speaking) write a function f(x) = x+1, and the system applies it to the real input and outputs its return value.
For another example, instead of raising an exception as a side effect, your pure function returns a special value representing the exception. Various "monads" such as IO in Haskell generalize these ideas, that is, represent side effects by pure functions (actual implementations are more complicated, of course).

What are the main differences between CLISP, ECL, and SBCL?

I want to do some simulations with ACT-R and I will need a Common Lisp implementation. I have three Common Lisp implementations available: (1) CLISP, (2) ECL, and (3) SBCL. As you might have gathered from the links, I have read a bit about all three of them on Wikipedia. But I would like the opinion of some experienced users. More specifically I would like to know:
(i) What are the main differences between the three implementations (e.g.: What are they best at? Is any of them used only for specific purposes and might therefore not be suited for specific tasks?)?
(ii) Is there an obvious choice either based on the fact that I will be using ACT-R or based on general reasons?
As this could be interpreted as a subjective question: I checked "What topics can I ask about here?" and "What types of questions should I avoid asking?", and if I read them correctly this should not qualify as forbidden fruit.
I wrote a moderately-sized application and ran it in SBCL, CCL, ECL, CLISP, ABCL, and LispWorks. For my application, SBCL is far and away the fastest, and it's got a pretty good debugger. It's a bit strict about some warnings--you may end up coding in a slightly more regimented way, or turning off one or more warnings.
I agree with Sylwester: If possible, write to the standard, and then you can run your code in any implementation. You'll figure out through testing which is best for your project.
Since SBCL compiles so aggressively, once in a while the stacktrace in the debugger is less informative than I'd like. This can probably be controlled with parameters, but I just rerun the same code in one of the other implementations. ABCL has an informative stacktrace, for example, as I recall. (It's also very slow, but if you want real Common Lisp and Java interoperability, it's the only option.)
One of the nice things about Common Lisp is how many high-quality implementations there are, most of them free.
For informal use--e.g. to learn Common Lisp--CCL or CLISP may be a better choice than SBCL.
I have never tried compiling to C using ECL. It's possible that it would beat SBCL on speed for some applications. I have no idea.
CLISP and LispWorks will not handle arbitrarily long argument lists (unless that's been fixed in the last couple of years, but I doubt it). This turned out to be a problem with my application, but would not be a problem for most code.
Doesn't ACT-R come out of Carnegie Mellon? What do its authors use? My guess would be CMUCL or SBCL, which is derived from CMUCL. (I only tried CMUCL briefly. Its interpreter is very slow, but I assume that compiled code is very fast. I think that most people choose SBCL over CMUCL, however.)
(It's possible that this question belongs on Programmers.SE.)
In general, SBCL is the default choice among open-source Lisps. It is solid, well-supported, produces fast code, and provides many goodies beyond what the standard mandates (concurrency primitives, profiling, etc.). Another implementation with similar properties is CCL.
CLISP is more suitable if you're not an engineer, or if you want to quickly show Lisp to a non-engineer. It's a pretty basic implementation, but quick to get running and user-friendly. A Lisp calculator :)
ECL's major selling point is that it's embeddable, i.e. it is rather easy to make it work inside a C application, like a web server etc. It's a good choice for geeks who want to explore solutions on the boundary of Lisp and the outside world. If you're not interested in such a use case, I wouldn't recommend trying it, especially since it is not actively supported at the moment.
Their names, their bugs, and their non-standard additions (using them will lock you in).
I use CLISP as a REPL and for testing during development, and usually SBCL for production. ECL I've never used.
I recommend you test your code with more than one implementation.
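One common way to write to the standard while still using implementation-specific goodies is to isolate the differences behind read-time feature conditionals. A hedged sketch (the SBCL form is standard practice; the CLISP form is from memory, so check its manual):

(defun save-image (path)
  #+sbcl  (sb-ext:save-lisp-and-die path :executable t)
  #+clisp (ext:saveinitmem path :executable t)
  #-(or sbcl clisp)
  (error "Don't know how to save an image on this implementation."))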

2 questions at the end of a functional programming course

Here seems to be the two biggest things I can take from the How to Design Programs (simplified Racket) course I just finished, straight from the lecture notes of the course:
1) Tail call optimization, and the lack thereof in non-functional languages:
Sadly, most other languages do not support TAIL CALL OPTIMIZATION. Put another way, they do build up a stack even for tail calls.
Tail call optimization was invented in the mid 70s, long after the main elements of most languages were developed. Because they do not have tail call optimization, these languages provide a fixed set of LOOPING CONSTRUCTS that make it possible to traverse arbitrary sized data.
a) What are the equivalents to this type of optimization in procedural languages that don't feature it?
b) Does using those equivalents mean we avoid building up a stack in similar situations in languages that don't have it?
2) Mutation and multicore processors
This mechanism is fundamental in almost any other language you program in. We have delayed introducing it until now for several reasons:
despite being fundamental, it is surprisingly complex
overuse of it leads to programs that are not amenable to parallelization (running on multiple processors); since multi-core computers are now common, the ability to use mutation only when needed is becoming more and more important
overuse of mutation can also make it difficult to understand programs, and difficult to test them well
But mutable variables are important, and learning this mechanism will give you more preparation to work with Java, Python and many other languages. Even in such languages, you want to use a style called "mostly functional programming".
I learned some Java, Python and C++ before taking this course, so I came to take mutation for granted. Now that has all been thrown in the air by the above statement. My questions are:
a) where could I find more detailed information regarding what is suggested in the 2nd bullet, and what to do about it, and
b) what kind of patterns would emerge from a "mostly functional programming" style, as opposed to a more careless style I probably would have had had I continued on with those other languages instead of taking this course?
As Leppie points out, looping constructs manage to recover the space savings of proper tail calling, for the particular kinds of loops that they support. The only problem with looping constructs is that the ones you have are never enough, unless you just hurl the ball into the user's court and force them to model the stack explicitly.
To take an example, suppose you're traversing a binary tree using a loop. It works... but you need to explicitly keep track of the "ones to come back to." A recursive traversal in a tail-calling language allows you to have your cake and eat it too, by not wasting space when not required, and not forcing you to keep track of the stack yourself.
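As a Common Lisp sketch of the two traversals (trees represented as (value left right) lists; both visit nodes in the same order):

(defun visit-recursive (tree fn)
  ;; The call stack remembers the "ones to come back to" for us.
  (when tree
    (funcall fn (first tree))
    (visit-recursive (second tree) fn)
    (visit-recursive (third tree) fn)))

(defun visit-loop (tree fn)
  ;; Without recursion we must model that stack explicitly.
  (let ((stack (list tree)))
    (loop until (null stack)
          do (let ((node (pop stack)))
               (when node
                 (funcall fn (first node))
                 (push (third node) stack)       ; right child: visit later
                 (push (second node) stack)))))) ; left child: visit next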
Your question on parallelism and concurrency is much more wide-open, and the best pointers are probably to areas of research, rather than existing solutions. I think that most would agree that there's a crisis going on in the computing world; how do we adapt our mutation-heavy programming skills to the new multi-core world?
Simply switching to a functional paradigm isn't a silver bullet here, either; we still don't know how to write high-level code and generate blazing fast non-mutating run-concurrently code. Lots of folks are working on this, though!
To expand on the "mutability makes parallelism hard" concept, when you have multiple cores going, you have to use synchronisation if you want to modify something from one core and have it be seen consistently by all the other cores.
Getting synchronisation right is hard. If you over-synchronise, you have deadlocks, slow (serial rather than parallel) performance, etc. If you under-synchronise, you have partially-observed changes (where another core sees only a portion of the changes you made from a different core), leaving your objects observed in an invalid "halfway changed" state.
It is for that reason that many functional programming languages encourage a message-queue concept instead of a shared state concept. In that case, the only shared state is the message queue, and managing synchronisation in a message queue is a solved problem.
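As a hedged Common Lisp sketch of the message-queue idea, using SBCL's bundled sb-concurrency contrib (an implementation library, not part of the standard):

(require :sb-concurrency)

(defvar *inbox* (sb-concurrency:make-mailbox))

(defun worker ()
  ;; The mailbox is the only shared state, and its synchronisation
  ;; is a solved problem inside the library.
  (loop for msg = (sb-concurrency:receive-message *inbox*)
        until (eq msg :stop)
        do (format t "got ~a~%" msg)))

(let ((thread (sb-thread:make-thread #'worker)))
  (sb-concurrency:send-message *inbox* 42)
  (sb-concurrency:send-message *inbox* :stop)
  (sb-thread:join-thread thread))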
a) What are the equivalents to this type of optimization in procedural languages that don't feature it? b) Do using those equivalents mean we avoid building up a stack in similar situations in languages that don't have it?
Well, the significance of a tail call is that it can evaluate another function without adding to the call stack, so anything that builds up the stack can't really be called an equivalent.
A tail call behaves essentially like a jump to the new code, using the language trappings of a function call and all the appropriate detail management. So in languages without this optimization, you'd use a jump within a single function. Loops, conditional blocks, or even arbitrary goto statements if nothing else works.
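For instance, here is the same computation written as a tail call and as the loop you would use instead. (Note the Common Lisp standard does not require tail-call elimination, though SBCL and most serious compilers perform it under default optimization settings.)

(defun sum-to-rec (n acc)
  (if (zerop n)
      acc
      (sum-to-rec (1- n) (+ acc n))))  ; tail call: effectively a jump

(defun sum-to-loop (n)
  (let ((acc 0))
    (loop for i from n downto 1
          do (incf acc i))
    acc))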
a) where could I find more detailed information regarding what is suggested in the 2nd bullet, and what to do about it
The second bullet sounds like an oversimplification. There are many ways to make parallelization more difficult than it needs to be, and overuse of mutation is just one.
However, note that parallelization (splitting a task into pieces that can be done simultaneously) is not entirely the same thing as concurrency (having multiple tasks executed simultaneously that may interact), though there's certainly overlap. Avoiding mutation is incredibly helpful in writing concurrent programs, since immutable data avoids a lot of race conditions and resource contention that would otherwise be possible.
b) what kind of patterns would emerge from a "mostly functional programming" style, as opposed to a more careless style I probably would have had had I continued on with those other languages instead of taking this course?
Have you looked at Haskell or Clojure? Both are heavily inclined to a very functional style emphasizing controlled mutation. Haskell is more rigorous about it but has a lot of tools for working with limited forms of mutability, while Clojure is a bit more informal and might be more familiar to you since it's another Lisp dialect.

How do I use Declarations (type, inline, optimize) in Scheme?

How do I declare the types of the parameters in order to circumvent type checking?
How do I tell the compiler to make a function run as fast as possible, as with (optimize speed (safety 0))?
How do I make an inline function in Scheme?
How do I use an unboxed representation of a data object?
And finally are any of these important or necessary? Can I depend on my compiler to make these optimizations?
thanks,
kunjaan.
You can't do any of these in any portable way.
You can get a "sort of" inlining using macros, but it's almost always to try to do that. People who write Scheme (or any other language) compilers are usually much better than you in deciding when it is best to inline a function.
You can't make values unboxed; some Scheme compilers will do that as an optimization, but not in any way that is visible (because it is an optimization -- so it should preserve the semantics).
As for your last question, the answer is very subjective. Some people cannot sleep at night without knowing exactly how many CPU cycles some function uses. Some people don't care and are fine with trusting the compiler to optimize things reasonably well. At least at the stage where you're more a student of the language than an implementor, it is better to stick with the latter group.
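For contrast, here is roughly what the question alludes to in Common Lisp, where such declarations are standardized (though whether and how a given compiler honors them is still implementation-dependent):

(declaim (inline square))            ; request inlining
(defun square (x)
  (declare (type fixnum x)           ; parameter type declaration
           (optimize (speed 3) (safety 0)))
  (the fixnum (* x x)))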
If you want to help out the compiler, consider reducing top level definitions where possible.
If the compiler sees a function at top-level, it's very hard for it to guess how that function might be used or modified by the program.
If a function is defined within the scope of a function that uses it, the compiler's job becomes much simpler.
There is a section about this in the Chez Scheme manual:
http://www.scheme.com/csug7/use.html#./use:h4
Apparently Chez is one of the fastest Scheme implementations there is. If it needs this sort of "guidance" to make good optimizations, I suspect other implementations can't live without it either (or they just ignore it altogether).
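Translated into a minimal sketch (in Common Lisp syntax, since the same reasoning applies there), the advice looks like this:

;; Top-level: HELPER may be redefined at any moment, so the compiler
;; must usually emit a full, late-bound call.
(defun helper (x) (* x x))
(defun process (items)
  (mapcar #'helper items))

;; Local: only PROCESS-2 can see HELPER, so inlining it is safe.
(defun process-2 (items)
  (flet ((helper (x) (* x x)))
    (mapcar #'helper items)))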

Advantages of stateless programming?

I've recently been learning about functional programming (specifically Haskell, but I've gone through tutorials on Lisp and Erlang as well). While I found the concepts very enlightening, I still don't see the practical side of the "no side effects" concept. What are the practical advantages of it? I'm trying to think in the functional mindset, but there are some situations that just seem overly complex without the ability to save state in an easy way (I don't consider Haskell's monads 'easy').
Is it worth continuing to learn Haskell (or another purely functional language) in-depth? Is functional or stateless programming actually more productive than procedural? Is it likely that I will continue to use Haskell or another functional language later, or should I learn it only for the understanding?
I care less about performance than productivity. So I'm mainly asking if I will be more productive in a functional language than a procedural/object-oriented/whatever.
Read Functional Programming in a Nutshell.
There are lots of advantages to stateless programming, not least of which is dramatically simpler multithreaded and concurrent code. To put it bluntly, mutable state is the enemy of multithreaded code. If values are immutable by default, programmers don't need to worry about one thread mutating the value of shared state between two threads, so it eliminates a whole class of multithreading bugs related to race conditions. Since there are no race conditions, there's no reason to use locks either, so immutability eliminates another whole class of bugs related to deadlocks as well.
That's the big reason why functional programming matters, and probably the best one for jumping on the functional programming train. There are also lots of other benefits, including simplified debugging (i.e. functions are pure and do not mutate state in other parts of an application), more terse and expressive code, less boilerplate code compared to languages which are heavily dependent on design patterns, and the compiler can more aggressively optimize your code.
The more pieces of your program are stateless, the more ways there are to put pieces together without having anything break. The power of the stateless paradigm lies not in statelessness (or purity) per se, but the ability it gives you to write powerful, reusable functions and combine them.
You can find a good tutorial with lots of examples in John Hughes's paper Why Functional Programming Matters (PDF).
You will be gobs more productive, especially if you pick a functional language that also has algebraic data types and pattern matching (Caml, SML, Haskell).
Many of the other answers have focused on the performance (parallelism) side of functional programming, which I believe is very important. However, you did specifically ask about productivity, as in, can you program the same thing faster in a functional paradigm than in an imperative paradigm.
I actually find (from personal experience) that programming in F# matches the way I think better, and so it's easier. I think that's the biggest difference. I've programmed in both F# and C#, and there's a lot less "fighting the language" in F#, which I love. You don't have to think about the details in F#. Here are a few examples of what I've found I really enjoy.
For example, even though F# is statically typed (all types are resolved at compile time), the type inference figures out what types you have, so you don't have to say it. And if it can't figure it out, it automatically makes your function/class/whatever generic. So you never have to write any generics by hand; it's all automatic. I find that means I'm spending more time thinking about the problem and less about how to implement it. In fact, whenever I come back to C#, I find I really miss this type inference; you never realise how distracting it is until you don't need to do it anymore.
Also in F#, instead of writing loops, you call functions. It's a subtle change, but significant, because you don't have to think about the loop construct anymore. For example, here's a piece of code which would go through and match something (I can't remember what, it's from a Project Euler puzzle):
let matchingFactors =
    factors
    |> Seq.filter (fun x -> largestPalindrome % x = 0)
    |> Seq.map (fun x -> (x, largestPalindrome / x))
I realise that doing a filter then a map (that's a conversion of each element) in C# would be quite simple, but you have to think at a lower level. Particularly, you'd have to write the loop itself, and have your own explicit if statement, and those kinds of things. Since learning F#, I've realised I've found it easier to code in the functional way, where if you want to filter, you write "filter", and if you want to map, you write "map", instead of implementing each of the details.
I also love the |> operator, which I think separates F# from OCaml, and possibly other functional languages. It's the pipe operator; it lets you "pipe" the output of one expression into the input of another expression. It makes the code follow how I think more. Like in the code snippet above, that's saying, "take the factors sequence, filter it, then map it." It's a very high level of thinking, which you don't get in an imperative programming language because you're so busy writing the loop and if statements. It's the one thing I miss the most whenever I go into another language.
So just in general, even though I can program in both C# and F#, I find it easier to use F# because you can think at a higher level. I would argue that because the smaller details are removed from functional programming (in F# at least), that I am more productive.
Edit: I saw in one of the comments that you asked for an example of "state" in a functional programming language. F# can be written imperatively, so here's a direct example of how you can have mutable state in F#:
let mutable x = 5
for i in 1..10 do
    x <- x + i
Consider all the difficult bugs you've spent a long time debugging.
Now, how many of those bugs were due to "unintended interactions" between two separate components of a program? (Nearly all threading bugs have this form: races involving writing shared data, deadlocks, ... Additionally, it is common to find libraries that have some unexpected effect on global state, or read/write the registry/environment, etc.) I would posit that at least 1 in 3 'hard bugs' fall into this category.
Now if you switch to stateless/immutable/pure programming, all those bugs go away. You are presented with some new challenges instead (e.g. when you do want different modules to interact with the environment), but in a language like Haskell, those interactions get explicitly reified into the type system, which means you can just look at the type of a function and reason about the type of interactions it can have with the rest of the program.
That's the big win from 'immutability' IMO. In an ideal world, we'd all design terrific APIs and even when things were mutable, effects would be local and well-documented and 'unexpected' interactions would be kept to a minimum. In the real world, there are lots of APIs that interact with global state in myriad ways, and these are the source of the most pernicious bugs. Aspiring to statelessness is aspiring to be rid of unintended/implicit/behind-the-scenes interactions among components.
One advantage of stateless functions is that they permit precalculation or caching of the function's return values. Even some C compilers let you explicitly mark functions as stateless to improve their optimisability (e.g. GCC's pure and const function attributes). As many others have noted, stateless functions are much easier to parallelise.
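A small Common Lisp sketch of that caching point: memoization is only sound when a function's result depends on nothing but its arguments.

(defun memoize (fn)
  (let ((cache (make-hash-table :test #'equal)))
    (lambda (&rest args)
      (multiple-value-bind (value hit) (gethash args cache)
        (if hit
            value
            (setf (gethash args cache) (apply fn args)))))))

;; Safe for a pure function:
(defparameter *cached-expt* (memoize (lambda (x n) (expt x n))))
;; Unsound for an impure one: (memoize #'read-line) would replay
;; the first line it ever read, forever.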
But efficiency is not the only concern. A pure function is easier to test and debug since anything that affects it is explicitly stated. And when programming in a functional language, one gets in the habit of making as few functions "dirty" (with I/O, etc.) as possible. Separating out the stateful stuff this way is a good way to design programs, even in not-so-functional languages.
Functional languages can take a while to "get", and it's difficult to explain to someone who hasn't gone through that process. But most people who persist long enough finally realise that the fuss is worth it, even if they don't end up using functional languages much.
Without state, it is very easy to automatically parallelize your code (as CPUs are made with more and more cores this is very important).
Stateless web applications are essential when you start having higher traffic.
There could be plenty of user data that you don't want to store on the client side, for security reasons for example. In that case you need to store it server-side. You could use the web application's default session, but if you have more than one instance of the application you will need to make sure that each user is always directed to the same instance.
Load balancers often have the ability to use "sticky sessions", where the load balancer somehow knows which server to send the user's request to. This is not ideal, though: for example, it means that every time you restart your web application, all connected users will lose their session.
A better approach is to store the session behind the web servers in some sort of data store; these days there are loads of great NoSQL products available for this (Redis, Mongo, Elasticsearch, Memcached). This way the web servers are stateless, but you still have state server-side, and the availability of this state can be managed by choosing the right datastore setup. These data stores usually have great redundancy, so it should almost always be possible to make changes to your web application, and even the data store, without impacting the users.
My understanding is that FP also has a huge impact on testing. Not having mutable state will often force you to supply more data to a function than you would have to for a class. There are tradeoffs, but think about how easy it would be to test a function like incrementNumberByN rather than a Counter class.
Object
describe("counter", () => {
it("should increment the count by one when 'increment' invoked without
argument", () => {
const counter = new Counter(0)
counter.increment()
expect(counter.count).toBe(1)
})
it("should increment the count by n when 'increment' invoked with
argument", () => {
const counter = new Counter(0)
counter.increment(2)
expect(counter.count).toBe(2)
})
})
functional
describe("incrementNumberBy(startingNumber, increment)", () => {
it("should increment by 1 if n not supplied"){
expect(incrementNumberBy(0)).toBe(1)
}
it("should increment by 1 if n = 1 supplied"){
expect(countBy(0, 1)).toBe(1)
}
})
Since the function has no state and the data going in is more explicit, there are fewer things to focus on when you are trying to figure out why a test might be failing. On the tests for the counter we had to do
const counter = new Counter(0)
counter.increment()
expect(counter.count).toBe(1)
Both of the first two lines contribute to the value of counter.count. In a simple example like this 1 vs 2 lines of potentially problematic code isn't a big deal, but when you deal with a more complex object you might be adding a ton of complexity to your testing as well.
In contrast, when you write a project in a functional language, it nudges you towards keeping fancy algorithms dependent on the data flowing in and out of a particular function, rather than being dependent on the state of your system.
Another way of looking at it would be illustrating the mindset for testing a system in each paradigm.
For functional programming: make sure function A works for given inputs, make sure function B works for given inputs, make sure function C works for given inputs.
For OOP: Make sure Object A's method works given an input argument of X after doing Y and Z to the state of the object. Make sure Object B's method works given an input argument of X after doing W and Y to the state of the object.
The advantages of stateless programming coincide with those of goto-free programming, only more so.
Though many descriptions of functional programming emphasize the lack of mutation, the lack of mutation also goes hand in hand with the lack of unconditional control transfers, such as loops. In functional programming languages, recursion, in particular tail recursion, replaces looping. Recursion eliminates both the unconditional control construct and the mutation of variables in the same stroke. The recursive call binds argument values to parameters, rather than assigning values.
To understand why this is advantageous, rather than turning to functional programming literature, we can consult the 1968 paper by Dijkstra, "Go To Statement Considered Harmful":
"The unbridled use of the go to statement has an immediate consequence that it becomes terribly hard to find a meaningful set of coordinates in which to describe the process progress."
Dijkstra's observations, however, still apply to structured programs which avoid go to, because statements like while, if and whatnot are just window dressing on go to! Without using go to, we can still find it impossible to find the coordinates in which to describe the process progress. Dijkstra neglected to observe that bridled go to still has all the same issues.
What this means is that at any given point in the execution of the program, it is not clear how we got there. When we run into a bug, we have to use backwards reasoning: how did we end up in this state? How did we branch into this point of the code? Often it is hard to follow: the trail goes back a few steps and then runs cold due to a vastness of possibilities.
Functional programming gives us the absolute coordinates. We can rely on analytical tools like mathematical induction to understand how the program arrived into a certain situation.
For example, to convince ourselves that a recursive function is correct, we can just verify its base cases, and then understand and check its inductive hypothesis.
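A tiny instance of that reasoning, in Common Lisp:

(defun len (xs)
  (if (null xs)
      0                       ; base case: (len '()) = 0, trivially correct
      (1+ (len (rest xs)))))  ; inductive step: if (len (rest xs)) is the
                              ; length of the tail, then one more is the
                              ; length of the whole list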
If the logic is written as a loop with mutating variables, we need a more complicated set of tools: breaking down the logic into steps with pre- and post-conditions, which we rewrite in terms of mathematics that refers to the prior and current values of variables and such. Yes, if the program uses only certain control structures, avoiding go to, then the analysis is somewhat easier. The tools are tailored to the structures: we have a recipe for how we analyze the correctness of an if, a while, and other structures.
However, by contrast, in a functional program there is no prior value of any variable to reason about; that whole class of problem has gone away.
Haskell and Prolog are good examples of languages which could be implemented as stateless programming languages. But unfortunately they are not, so far: both Prolog and Haskell currently have imperative implementations. (Some SMT solvers seem closer to stateless computing.)
This is why you are having a hard time seeing any benefits from these programming languages: because of the imperative implementations we get none of the performance and stability benefits. The lack of stateless-language infrastructure is the main reason you don't feel the advantages of any stateless programming language: such languages are effectively absent.
These are some benefits of pure statelessness:
Task description is the program (compact code)
Stability due to absence of state-dependent bugs (most bugs)
Cacheable results (the same inputs always produce the same outputs)
Distributable computations
Rebaseable to quantum computations
Thin code for multiple overlapping clauses
Allows differentiable programming optimizations
Consistently applying code changes (adding logic breaks nothing written)
Optimized combinatorics (no need to brute-force enumerations)
Stateless coding is about concentrating on the relations between data, which are then used for computing by deduction. Basically it is the next level of programming abstraction. It is much closer to natural language than any imperative programming language, because it allows describing relations instead of sequences of state changes.
