Or even heavily functional styles in non-functional, non-memory-managed languages.
What sort of techniques are there to deal with problems like intermediate garbage, cleaning up memory allocated for laziness/thunks, and performance (since you can't easily share resources between immutable values if you have to track each one's lifetime in order to deallocate it; smart pointers?).
You might be interested in programming languages with linear or uniqueness types; these can manage resources (and memory in particular). Recent examples: ATS and LinearML.
There have been attempts at region-based memory management (e.g. Cyclone), but they haven't taken off just yet. Regions also allow for (earlier) memory reclamation, but they aren't enough on their own (there are programs which, when run with region-based memory management, exhibit unacceptable performance). The two schemes could be mixed, I think.
Back to your question, some ATS programs can run without garbage collection. (I won't say that such programs are written in "functional" style, such as in SML, but in a mix of imperative and first-order functional style.)
The only relevant thing I can think of is how MLton eliminates a significant part of garbage collection with a region analysis. It should be possible, in theory, to implement a compiler which would treat an unmanageable and un-annotated pointer leak as an error, and then one would be able to use many functional programming techniques in an entirely manual memory management setting.
Clarifying the question a bit:
Garbage collectors such as those used by the JVM involve a lot of complexity as a result of the nature of the languages they support. What simplifications would be afforded to a garbage collector purpose-built for a pure, functional, eagerly-evaluated programming language compared to say, the JVM garbage collector?
I'm hardly an expert in functional language design, but when thinking about your question the following topics immediately come to mind:
most probably it would be a generational GC; at least I see no reason why it should not be. It could probably benefit from tuning for the large number of short-lived temporary objects
no write barriers - due to immutability it is not possible to create a reference from an older object to a newer one. No old-to-young references means no need for remembered sets in a generational GC, and thus no write barriers to maintain them. This is a great simplification, in my humble opinion.
easier safe points - due to the nature of functional languages, function calls are much denser than in object-oriented programming; even loops are defined as recursive function calls. This should make implementing GC safe points easier, for example simply on each function entry. See this article for reference.
no pinning - if our hypothetical, pure functional language does not support native code cooperation, object pinning will not be necessary in the case of a compacting GC. This can greatly simplify its design.
no finalization - object finalization probably would not fit into a purely functional language; I feel it breaks referential transparency. And if we do not support native resources, it is not needed in the first place.
I have been reading up on functional programming and I have two questions that I was hoping someone could help me with.
I've read that lazy functional programs can be inefficient if you are accessing the same data often because of the extra overhead of checking whether the expression has been evaluated. I have also read in the first answer of the following thread (Are functional programming languages suitable for graphics programming?) that functional programming can be resource demanding in the context of graphical programming because it creates a lot of temporary objects (I assume this has to do with having to create new objects to simulate state?).
Are there any other areas where functional programming might end up being resource heavy / inefficient in comparison to OOP/procedural programming?
I have read in the first answer in the following thread (Pitfalls/Disadvantages of Functional Programming) that "it is very difficult to predict the time and space costs of evaluating a lazy functional program". Could someone give a simple (if that exists) explanation of why this is the case? I assume it has to do with lazy evaluation only evaluating expressions when needed, but why is it not simple to predict a worst-case scenario, similar to imperative programming where everything is evaluated?
I've read that lazy functional programs can be inefficient if you are accessing the same data often because of the extra overhead of checking whether the expression has been evaluated.
This involves checking a tag bit on a pointer. It is cheap.
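To make that concrete, here is a small sketch in OCaml, where laziness is explicit through Lazy.t (the names and numbers are just illustrative). The first force does the work and memoizes the result; every later force is little more than the tag check described above.

(* Sketch: explicit laziness in OCaml. The first Lazy.force evaluates the
   thunk and caches the result; subsequent forces just check a tag and
   return the cached value. *)
let expensive : int Lazy.t =
  lazy (print_endline "evaluating...";
        List.fold_left ( + ) 0 (List.init 1_000_000 Fun.id))

let () =
  let a = Lazy.force expensive in   (* prints "evaluating...", does the work *)
  let b = Lazy.force expensive in   (* cheap: returns the memoized result *)
  Printf.printf "%d %d\n" a b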
functional programming can be resource demanding in the context of graphical programming because it creates a lot of temporary objects
This depends on the implementation. Allocation in pure FP languages is cheap, as immutability means you can avoid some write barriers. Object allocation is roughly similar to OO languages, though some GCs, such as GHC's, are very efficient compared to e.g. Java's.
Are there any other areas where functional programming might end up being resource heavy / inefficient in comparison to OOP/procedural programming?
There are plenty of problems that require very tight resource usage. E.g. operating systems. In such environments you need libraries for direct access to hardware and the ability to mutate memory in place. Depending on the functional language implementation you're using, you may or may not have this.
it is very difficult to predict the time and space costs of evaluating a lazy functional program
It is harder to model lazy evaluation costs because the amount of work done, and when it is done, depends on the input data, which is only available at runtime.
Practically, languages let you choose whether you want to use strict or lazy evaluation, as neither is appropriate for all situations.
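A hedged sketch of why the costs are hard to predict, again in OCaml where laziness has to be written out explicitly (the helper names are made up): the strict fold does its work immediately in constant extra space, while the lazy fold quietly builds one deferred computation per element, paying nothing up front and everything at the moment the result is finally demanded.

(* Strict sum: work is done immediately, O(1) extra space. *)
let sum_strict xs = List.fold_left ( + ) 0 xs

(* "Lazy" sum: builds a chain of unevaluated thunks, one per element.
   No work happens until the final force, the chain itself takes space
   proportional to the list length, and forcing a very long chain can
   even overflow the stack. That data-dependent deferral is exactly what
   makes costs hard to predict. *)
let sum_lazy xs =
  List.fold_left (fun acc x -> lazy (Lazy.force acc + x)) (lazy 0) xs

let () =
  let xs = List.init 10_000 Fun.id in
  Printf.printf "%d %d\n" (sum_strict xs) (Lazy.force (sum_lazy xs))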
Here are the two biggest things I took from the How to Design Programs (simplified Racket) course I just finished, straight from the lecture notes of the course:
1) Tail call optimization, and the lack thereof in non-functional languages:
Sadly, most other languages do not support TAIL CALL OPTIMIZATION. Put another way, they do build up a stack even for tail calls.
Tail call optimization was invented in the mid 70s, long after the main elements of most languages were developed. Because they do not have tail call optimization, these languages provide a fixed set of LOOPING CONSTRUCTS that make it possible to traverse arbitrary sized data.
a) What are the equivalents to this type of optimization in procedural languages that don't feature it?
b) Do using those equivalents mean we avoid building up a stack in similar situations in languages that don't have it?
2) Mutation and multicore processors
This mechanism is fundamental in almost any other language you program in. We have delayed introducing it until now for several reasons:
despite being fundamental, it is surprisingly complex
overuse of it leads to programs that are not amenable to parallelization (running on multiple processors). Since multi-core computers are now common, the ability to use mutation only when needed is becoming more and more important
overuse of mutation can also make it difficult to understand programs, and difficult to test them well
But mutable variables are important, and learning this mechanism will give you more preparation to work with Java, Python and many other languages. Even in such languages, you want to use a style called "mostly functional programming".
I learned some Java, Python and C++ before taking this course, so I came to take mutation for granted. Now that has all been thrown into the air by the statement above. My questions are:
a) where could I find more detailed information regarding what is suggested in the 2nd bullet, and what to do about it, and
b) what kind of patterns would emerge from a "mostly functional programming" style, as opposed to a more careless style I probably would have had had I continued on with those other languages instead of taking this course?
As Leppie points out, looping constructs manage to recover the space savings of proper tail calling, for the particular kinds of loops that they support. The only problem with looping constructs is that the ones you have are never enough, unless you just hurl the ball into the user's court and force them to model the stack explicitly.
To take an example, suppose you're traversing a binary tree using a loop. It works... but you need to explicitly keep track of the "ones to come back to." A recursive traversal in a tail-calling language allows you to have your cake and eat it too, by not wasting space when not required, and not forcing you to keep track of the stack yourself.
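Here's a small sketch of that point in OCaml (not the course's Racket; the type and function names are my own): the recursive traversal lets the call stack remember the subtrees still to visit, while the loop version has to model that bookkeeping with an explicit worklist.

type tree = Leaf | Node of tree * int * tree

(* Recursive traversal: the call stack remembers the "ones to come back to". *)
let rec sum_tree = function
  | Leaf -> 0
  | Node (l, v, r) -> sum_tree l + v + sum_tree r

(* Loop version: an explicit worklist plays the role of the call stack. *)
let sum_tree_loop t =
  let total = ref 0 and work = ref [ t ] in
  while !work <> [] do
    match !work with
    | [] -> ()
    | Leaf :: rest -> work := rest
    | Node (l, v, r) :: rest ->
        total := !total + v;
        work := l :: r :: rest
  done;
  !total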
Your question on parallelism and concurrency is much more wide-open, and the best pointers are probably to areas of research, rather than existing solutions. I think that most would agree that there's a crisis going on in the computing world; how do we adapt our mutation-heavy programming skills to the new multi-core world?
Simply switching to a functional paradigm isn't a silver bullet here, either; we still don't know how to write high-level code and generate blazing fast non-mutating run-concurrently code. Lots of folks are working on this, though!
To expand on the "mutability makes parallelism hard" concept, when you have multiple cores going, you have to use synchronisation if you want to modify something from one core and have it be seen consistently by all the other cores.
Getting synchronisation right is hard. If you over-synchronise, you have deadlocks, slow (serial rather than parallel) performance, etc. If you under-synchronise, you have partially-observed changes (where another core sees only a portion of the changes you made from a different core), leaving your objects observed in an invalid "halfway changed" state.
It is for that reason that many functional programming languages encourage a message-queue concept instead of a shared state concept. In that case, the only shared state is the message queue, and managing synchronisation in a message queue is a solved problem.
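For illustration, here is a minimal message-queue sketch in OCaml (assuming OCaml 5 domains; the channel type and function names are invented for this example). The only mutable shared state is the locked queue, so the synchronisation problem is confined to one well-understood place.

(* A tiny blocking channel: the queue is the only shared, mutated state. *)
type 'a channel = { q : 'a Queue.t; m : Mutex.t; c : Condition.t }

let make () = { q = Queue.create (); m = Mutex.create (); c = Condition.create () }

let send ch msg =
  Mutex.lock ch.m;
  Queue.push msg ch.q;
  Condition.signal ch.c;
  Mutex.unlock ch.m

let recv ch =
  Mutex.lock ch.m;
  while Queue.is_empty ch.q do Condition.wait ch.c ch.m done;
  let msg = Queue.pop ch.q in
  Mutex.unlock ch.m;
  msg

let () =
  let ch = make () in
  let worker = Domain.spawn (fun () -> send ch (21 * 2)) in
  Printf.printf "got %d\n" (recv ch);
  Domain.join worker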
a) What are the equivalents to this type of optimization in procedural languages that don't feature it? b) Do using those equivalents mean we avoid building up a stack in similar situations in languages that don't have it?
Well, the significance of a tail call is that it can evaluate another function without adding to the call stack, so anything that builds up the stack can't really be called an equivalent.
A tail call behaves essentially like a jump to the new code, using the language trappings of a function call and all the appropriate detail management. So in languages without this optimization, you'd use a jump within a single function. Loops, conditional blocks, or even arbitrary goto statements if nothing else works.
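A quick OCaml sketch of that equivalence (my own example, not from the course notes): the accumulator version ends in a tail call, so it runs in constant stack space, and the reference-cell loop below is the kind of construct you fall back on in languages without tail-call elimination.

(* Tail-recursive sum: the call to go is effectively a jump, so no stack growth. *)
let sum_rec xs =
  let rec go acc = function
    | [] -> acc
    | x :: rest -> go (acc + x) rest
  in
  go 0 xs

(* Loop-and-mutation equivalent, as used where tail calls are not optimized. *)
let sum_loop xs =
  let acc = ref 0 in
  List.iter (fun x -> acc := !acc + x) xs;
  !acc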
a) where could I find more detailed information regarding what is suggested in the 2nd bullet, and what to do about it
The second bullet sounds like an oversimplification. There are many ways to make parallelization more difficult than it needs to be, and overuse of mutation is just one.
However, note that parallelization (splitting a task into pieces that can be done simultaneously) is not entirely the same thing as concurrency (having multiple tasks executed simultaneously that may interact), though there's certainly overlap. Avoiding mutation is incredibly helpful in writing concurrent programs, since immutable data avoids a lot of race conditions and resource contention that would otherwise be possible.
b) what kind of patterns would emerge from a "mostly functional programming" style, as opposed to a more careless style I probably would have had had I continued on with those other languages instead of taking this course?
Have you looked at Haskell or Clojure? Both are heavily inclined to a very functional style emphasizing controlled mutation. Haskell is more rigorous about it but has a lot of tools for working with limited forms of mutability, while Clojure is a bit more informal and might be more familiar to you since it's another Lisp dialect.
I'm studying multicore parallelism in F#. I have to admit that immutability really helps in writing correct parallel implementations. However, it's hard to achieve good speedup and good scalability when the number of cores grows. For example, my experience with the Quick Sort algorithm is that many attempts to implement parallel Quick Sort in a purely functional way, using List or Array as the representation, have failed. Profiling those implementations shows that the number of cache misses increases significantly compared to the sequential versions. However, if one implements parallel Quick Sort using mutation inside arrays, a good speedup can be obtained. Therefore, I think mutation might be a good practice for optimizing multicore parallelism.
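Roughly, the mutating version I mean looks like the following sketch (written in OCaml, which is close to F#; it assumes OCaml 5 domains, and the helper names are mine). It mutates one shared array instead of allocating fresh lists, and only forks for the first few levels of recursion.

let swap a i j =
  let t = a.(i) in
  a.(i) <- a.(j);
  a.(j) <- t

(* Lomuto partition of a.(lo .. hi); returns the pivot's final index. *)
let partition a lo hi =
  let pivot = a.(hi) in
  let i = ref lo in
  for j = lo to hi - 1 do
    if a.(j) <= pivot then (swap a !i j; incr i)
  done;
  swap a !i hi;
  !i

(* In-place quicksort; par_depth bounds how many extra domains get spawned. *)
let rec qsort ?(par_depth = 2) a lo hi =
  if lo < hi then begin
    let p = partition a lo hi in
    if par_depth > 0 then begin
      let d = Domain.spawn (fun () -> qsort ~par_depth:(par_depth - 1) a lo (p - 1)) in
      qsort ~par_depth:(par_depth - 1) a (p + 1) hi;
      Domain.join d
    end else begin
      qsort ~par_depth a lo (p - 1);
      qsort ~par_depth a (p + 1) hi
    end
  end

It is called as qsort arr 0 (Array.length arr - 1); a production version would also cut over to a sequential sort for small ranges.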
I believe that cache locality is a big obstacle to multicore parallelism in a functional language. Functional programming involves creating many short-lived objects, and collecting those objects can hurt the locality behaviour of CPU caches. I have seen many suggestions on how to improve cache locality in imperative languages, for example, here and here. But it's not clear to me how they would be done in functional programming, especially with recursive data structures such as trees, etc., which appear quite often.
Are there any techniques to improve cache locality in an impure functional language (specifically F#)? Any advice or code examples are more than welcome.
As far as I can make out, the key to cache locality (multithreaded or otherwise) is
Keep work units in a contiguous block of RAM that will fit into the cache
To this end:
Avoid objects where possible
Objects are allocated on the heap, and might be sprayed all over the place, depending on heap fragmentation, etc.
You have essentially zero control over the memory placement of objects, to the extent that the GC might move them at any time.
Use arrays. Arrays are interpreted by most compilers as a contiguous block of memory.
Other collection datatypes might distribute things all over the place - linked lists, for example, are composed of pointers.
Use arrays of primitive types. Object types are allocated on the heap, so an array of objects is just an array of pointers to objects that may be distributed all over the heap.
Use arrays of structs, if you can't use primitives. Structs are value types, so their fields are arranged sequentially in memory and an array of structs is laid out inline by the .NET runtime rather than as an array of pointers (a small sketch appears at the end of this answer).
Work out the size of the cache on the machine you'll be executing it on
CPUs have different size L2 caches
It might be prudent to design your code to scale with different cache sizes
Or more simply, write code that will fit inside the lowest common cache size your code will be running on
Work out what needs to sit close to each datum
In practice, you're not going to fit your whole working set into the L2 cache
Examine (or redesign) your algorithms so that the data structures you are using hold data that's needed "next" close to data that was previously needed.
In practice this means that you may end up using data structures that are not theoretically perfect examples of computer science - but that's all right, computers aren't theoretically perfect examples of computer science either.
A good academic paper on the subject is Cache-Efficient String Sorting Using Copying
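As promised above, a small sketch of the arrays-of-values idea, written in OCaml rather than F# (OCaml lacks general value-type structs, but its float arrays are stored flat, which is enough to show the contrast; the particle types are invented for this example). Iterating an array of heap-allocated records chases one pointer per element, while the structure-of-arrays layout keeps each hot field in one contiguous block.

(* Array of heap-allocated records: one pointer per element, data scattered. *)
type particle = { x : float; y : float; mass : float }

let total_mass_aos (ps : particle array) =
  Array.fold_left (fun acc p -> acc +. p.mass) 0.0 ps

(* Structure of arrays: each field is a flat, unboxed float array, so a scan
   over masses is a linear walk through contiguous memory. *)
type particles = { xs : float array; ys : float array; masses : float array }

let total_mass_soa (ps : particles) =
  Array.fold_left ( +. ) 0.0 ps.masses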
Allowing mutability within functions in F# is a blessing, but it should only be used when optimizing code. A purely functional style often yields a more intuitive implementation, and hence is preferred.
Here's what a quick search returned: Parallel Quicksort in Haskell. Let's keep the discussion of performance concrete: choose a processor, then benchmark it with a specific algorithm.
To answer your question without specifics, I'd say that Clojure's approach to implementing STM could be a lesson in the general case on how to decouple execution paths on multicore processors and improve cache locality. But it's only effective when the number of reads outweighs the number of writes.
I am no parallelism expert, but here is my advice anyway.
I would expect that a locally mutable approach where each core is allocated an area of memory which is both read and written will always beat a pure approach.
Try to formulate your algorithm so that it works sequentially on a contiguous area of memory. This means that if you are working with graphs, it may be worth "flattening" nodes into arrays and replacing references with indices before processing. Regardless of cache locality issues, this is always a good optimisation technique in .NET, as it helps keep garbage collection out of the way.
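A minimal sketch of that flattening (OCaml, with made-up type names): references become integer indices into a few contiguous arrays, so the working set is compact and there are far fewer objects for the garbage collector to trace or move.

(* Pointer-based representation: every node is a separate heap object. *)
type node = { value : int; children : node list }

(* Flattened representation: node i's data lives at index i of each array. *)
type flat_graph = {
  values   : int array;         (* values.(i) is node i's payload *)
  children : int array array;   (* children.(i) lists the indices of node i's children *)
}

let sum_values g = Array.fold_left ( + ) 0 g.values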
A great approach is to split the work into smaller sections and iterate over each section on each core.
One option I would start with is to look for cache locality improvements on a single core before going parallel; then it should simply be a matter of subdividing the work again for each core. For example, if you are doing matrix calculations with large matrices then you could split the calculations into smaller sections.
Here's a great example of that: Cache Locality For Performance
There are some great sections in Tomas Petricek's book Real-World Functional Programming; check out Chapter 14, Writing Parallel Functional Programs, and you might find the treatment of parallel processing of a binary tree of particular interest.
To write scalable apps, cache locality is paramount to your application's speed. The principles are well explained in Scott Meyers' talk. Immutability does not play well with cache locality, since creating new objects in memory forces the CPU to reload the data from those new objects.
As noted in the talk, even on modern CPUs the L1 data cache is typically only around 32 KB per core. If you go multi-threaded you should try to consume as little memory as possible (goodbye immutability) to stay in the fastest cache. The L2 cache is about 4-8 MB, which is much bigger but still tiny compared to the data you are trying to sort.
If you manage to write an application which consumes as little memory as possible (data cache locality) you can get speedups of 20 or more. But if you manage this for 1 core, it may very well be that scaling to more cores will hurt performance, since all cores are competing for the same L2 cache.
To get the most out of it, the C++ guys use PGO (profile-guided optimization), which lets them profile their application and feed that profile back to the compiler so it can emit better-optimized code for the specific use case.
You can do better to a certain extent in managed code, but since so many factors influence your cache locality it is not likely that you will ever see a real-world speedup of 20 from cache locality alone. That remains the domain of C++ and of compilers which use profiling data.
You may get some ideas from these:
Cache-Oblivious Search Trees Project: http://supertech.csail.mit.edu/cacheObliviousBTree.html
DSpace@MIT, Cache coherence strategies in a many-core processor: http://dspace.mit.edu/handle/1721.1/61276
Also worth a look is a write-up that describes the idea of cache-oblivious algorithms via an elegant and efficient implementation of matrix multiply in F#.
I know GC wasn't popular in the days when Ada was developed and for the main use case of embedded programming it is still not a good choice.
But considering that Ada is a general-purpose programming language, why wasn't a partial, optional garbage collector (tracing only explicitly tagged memory objects) introduced in later revisions of the language and in the compiler implementations?
I simply can't think of developing a normal desktop application without a garbage collector anymore.
Ada was designed with military applications in mind. One of the big priorities in its design was determinism. i.e. one wanted an Ada program to consistently perform exactly the same way every time, in any environment, under all operating systems... that kinda thing.
A garbage collector turns one application into two, working against one another. Java programs develop hiccups at random intervals when the GC decides to go to work, and if it's too slow about it there's a chance that an application will run out of heap sometimes and not others.
Simplified: A garbage collector introduces some variability into a program that the designers didn't want. You make a mess - you clean it up! Same code, same behavior every time.
Not that Ada became a raging worldwide success, mind you.
Because Ada was designed for use in defense systems which control weapons in realtime, and garbage collection interferes with the timing of your application. This is dangerous which is why, for many years, Java came with a warning that it was not to be used for healthcare and military control systems.
I believe that the reason there is no longer such a disclaimer with Java is because the underlying hardware has become much faster as well as the fact that Java has better GC algorithms and better control over GC.
Remember that Ada was developed in the 1970's and 1980's at a time when computers were far less powerful than they are today, and in control applications timing issues were paramount.
First off, there is nothing in the language really that prohibits garbage collection.
Secondly some implementations do perform garbage collection. In particular, all the implementations that target the JVM garbage collect.
Thirdly, there is a way to get some amount of garbage collection with all compilers. You see, when an access type goes out of scope, if you specifically told the language to set aside a certain amount of space for storage of its objects, then that space will be destroyed at that point. I've used this in the past to get some modicum of garbage collection. The declaration voodoo you use is:
type Foo is access Blah;
for Foo'Storage_Size use 100_000;  -- 100K
If you do this, then all (100K of) memory allocated to Blah objects pointed to by Foo pointers will be cleaned up when the Foo type goes out of scope. Since Ada allows you to nest subroutines inside of other subroutines, this is particularly powerful.
To see more about what storage_size and storage pools can do for you, see LRM 13.11
Fourthly, well-written Ada programs don't tend to rely on dynamic memory allocation nearly as much as C programs do. C had a number of design holes that practitioners learned to use pointers to paint over. A lot of those idioms aren't necessary in Ada.
The answer is more complicated: Ada does not require a garbage collector because of real-time constraints and the like. However, the language has been cleverly designed to allow the implementation of a garbage collector.
Although many (almost all) compilers do not include a garbage collector, there are some notable implementations:
A patch for GNAT
Ada compilers targeting the Java Virtual Machine (I don't know if those projects are still supported); these use the garbage collector of the JVM.
There are plenty of other sources about garbage collection in Ada around the web. The subject has been discussed at length, mainly because of the fierce competition with Java in the mid '90s (have a look at this page: "Ada 95 is what the Java language should have been"), when Java was "The Next Big Thing" before Microsoft introduced C#.
First off, I'd like to know who's using Ada these days. I actually like the language, and there's even a GUI library for Linux/Ada, but I haven't heard anything about active Ada development for years. Thanks to its military connections, I'm really not sure if it's ancient history or so wildly successful that all mention of its use is classified.
I think there are a couple of reasons for the lack of GC in Ada. First, and foremost, it dates back to an era where most compiled languages used primarily stack or static memory, or in a few cases, explicit heap allocate/free. GC as a general philosophy really only took off about 1990 or so, when OOP, improved memory management algorithms and processors powerful enough to spare the cycles to run it all came into their own. What merely compiling Ada could do to an IBM 4331 mainframe in 1989 was simply merciless. Now I have a cell phone that can outperform that machine's CPU.
Another good reason is that there are people who think that rigorous program design includes precise control over memory resources, and that there shouldn't be any tolerance for letting dynamically-acquired objects float. Sadly, far too many people ended up leaking memory as dynamic memory became more and more the rule. Plus, like the "efficiency" of assembly language over high-level languages, and the "efficiency" of raw JDBC over ORM systems, the "efficiency" of manual memory management tends to invert as it scales up (I've seen ORM benchmarks where the JDBC equivalent was only half as efficient). Counter-intuitive, I know, but these days systems are much better at globally optimizing large applications, and they're able to make radical re-optimizations in response to superficially minor changes, including dynamically re-balancing algorithms on the fly based on detected load.
I'm afraid I'm going to have to differ with those who say that real-time systems can't afford garbage-collected memory. GC is no longer something that freezes the whole system every couple of minutes. We have much more intelligent ways to reclaim memory these days.
Your premise is incorrect: it does. See the package Ada.Finalization, which handles this kind of automatic cleanup for you.
I thought I'd share a really simple example of how to implement a Free() procedure (which would be used in a way familiar to all C programmers)...
with Ada.Unchecked_Deallocation;

procedure Leak is
   type Int_Ptr is access Integer;

   --  Instantiate the generic deallocation procedure for this access type.
   procedure Free is new Ada.Unchecked_Deallocation (Integer, Int_Ptr);

   Ptr : Int_Ptr := null;
begin
   Ptr := new Integer'(123);  --  allocate from the storage pool
   Free (Ptr);                --  deallocate; Free also resets Ptr to null
end Leak;
Calling Free at the end of the program will return the allocated Integer to the Storage Pool ("heap" in C parlance). You can use valgrind to demonstrate that this does in fact prevent 4 bytes of memory being leaked.
Ada.Unchecked_Deallocation (a generic procedure) can be used on (I think) any type that may be allocated using the "new" keyword. The Ada Reference Manual ("13.11.2 Unchecked Storage Deallocation") has more details.