Functional programming in nuclear plants?

After reading this question I just wondered whether it would be a good idea to use Haskell (or other functional programming languages) in mission critical industries.
Apart from Erlang, most languages used in those industries follow imperative/design-by-contract paradigms (Ada, Eiffel, C++).
But what about the functional ones?
The resulting code would be easily maintainable and stable, and many potential bugs could be eliminated at compile time by a strict type system.
Or is lazy evaluation more dangerous than helpful? Are there other security drawbacks?
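To make the compile-time point concrete, here is a minimal, hypothetical Haskell sketch (not from any real plant): encoding non-emptiness in a type removes a whole class of runtime failures before the program ever runs.

import Data.List.NonEmpty (NonEmpty(..))
import qualified Data.List.NonEmpty as NE

-- The type guarantees at least one element, so the "average of an empty
-- list" failure mode cannot even be expressed, let alone reached at runtime.
average :: NonEmpty Double -> Double
average xs = sum xs / fromIntegral (NE.length xs)

main :: IO ()
main = print (average (1 :| [2, 3]))   -- 2.0; there is no way to pass []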

I think you could. The language seems well suited for such situations, assuming you trust the compiler enough to use it in a mission-critical setting.
Remember that in mission-critical situations it is not only your code that is under scrutiny, but all the other components too. That includes the compiler (the Haskell compiler is not among the easiest to code-review), appropriately certified hardware that runs the software, the hardware that compiles your code, the hardware that bootstrapped the compilation of the compiler that will compile your code, hell - even the wires that connect it all to the power grid and the frequency of the voltage in the socket.
If you are interested in mission-critical software quality, I suggest looking at NASA's software quality procedures. They are very strict and formal, but then these are people who throw millions of dollars into space in the hope that it will survive pretty rough conditions, make it to Mars or wherever, operate autonomously, and send some nice photos of Martians back to Earth.
So, there you go: Haskell is good for mission critical situations, but it'd be an expensive process to bootstrap its usage there.

Related

SIGNAL vs Esterel vs Lustre

I'm very interested in dataflow- and concurrency-focused languages. I've read up on the subject, and repeatedly I see SIGNAL, Esterel, and Lustre mentioned, so I take it they're prominent players in those fields. However, many of the links in the resources I found are dead, and the languages don't seem very accessible. I managed to find a couple of compilers I can build from source (the Polychrony Toolset for SIGNAL and the Columbia Compiler for Esterel), but both have had issues compiling with CMake. Even textbooks teaching these languages have been tough to come by.
With the background out of the way, my actual questions are: is anyone really familiar with this field of programming? Are these languages still big deals, or have they "died out" by now? Could it be that they're just available to big companies with a hefty price tag, so the average programmer wouldn't really be able to pick them up?
I ran into a couple of other dataflow/concurrent-paradigm languages, such as Oz or E, but they seemed to be mostly for education and not suitable for real-world projects. Not to say they aren't impressive languages, but their implementations were limited and it would be unlikely to see them in production contexts. Does anyone know of other languages in this field that are actually accessible (good documentation, tutorials, and an installable compiler to actually code in)? Or can anyone speak for a language such as Oz or E and show that it is indeed good enough for large real-world projects?
None of the languages you mention is widespread. This means their compilers and runtimes have bugs, the communities are narrow and can give little help, and linking with general-purpose libraries can be problematic.
I recommend using an actively supported general-purpose language such as Java, Scala, Kotlin, or C++. They all have libraries to support asynchronous computation, and dataflow is no more than support for asynchronous procedure calls. You can even develop your own dataflow library; this is not that hard: I wrote a dataflow library for Java that is only 40 kilobytes of source code.
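As a sketch of that "asynchronous procedure call" idea - in Haskell here rather than the answer's Java, since any language with threads and one-shot variables will do - a toy dataflow node fires once both of its input tokens have arrived:

import Control.Concurrent (forkIO)
import Control.Concurrent.MVar

-- A toy dataflow node: blocks until both inputs are filled, then emits a result.
node :: (Int -> Int -> Int) -> MVar Int -> MVar Int -> MVar Int -> IO ()
node f inA inB out = do
  a <- takeMVar inA
  b <- takeMVar inB
  putMVar out (f a b)

main :: IO ()
main = do
  a   <- newEmptyMVar
  b   <- newEmptyMVar
  out <- newEmptyMVar
  _ <- forkIO (node (+) a b out)
  putMVar a 2              -- tokens may arrive in any order, from any thread
  putMVar b 3
  print =<< takeMVar out   -- 5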
Have you tried Céu? It is a recent variant of Esterel, and compiles to C. It is simple to understand, and provides a reactive and concurrent structuring of control flow. Native C calls can be made by just prefixing them with an underscore ("_printf").
http://ceu-lang.org
Also, see the paper "Structured Synchronous Reactive Programming with Céu" for a nice overview.
http://www.ceu-lang.org/chico/ceu_mod15_pre.pdf
These academic languages mostly disappeared as such and live on in industrial tools:
Esterel and Lustre are the basis of Ansys' SCADE.
Signal is used in 3DS' ControlBuild.
Esterel was used in Synopsys' ConcentricStudio.
Researchers also use Heptagon for synchronous-language studies: code generation, formal methods, new concepts.

How is Ada a 'safety critical' language?

I tried googling and read some snippets online. Why is Ada a "safety critical" language? Some things I notice are
No pointers
Specify a range (this type is an integer but can only be 1-12)
Explicitly state if a function parameter is out or in/out
Range-based loops (to avoid bounds errors or bounds checking)
The rest of the syntax I either didn't understand or didn't see how it helps it to be 'safety critical'. These are some points but I don't see the big picture. Does it have design-by-contract that I am not seeing? Does it have rules that make it harder for code to compile (and if so what are some?) Why is it a 'safety critical' language?
Well, this is pretty simple. The reason there's a lot of Ada syntax that doesn't seem to have much of anything to do with making the language "safety-critical" (whatever that means for a language) is that this was not Ada's design goal. It was designed to be a general-purpose compiled systems programming language, sufficiently capable that the U.S. Department of Defense could get rid of all the little one-use languages it had to support all over the place.
The fact that the end result is a language that is rather useful for safety-critical applications was just a happy side-effect of the fact that the language was very well-designed with military applications (where lives are often staked on the software's reliability) in mind.
Surprisingly few other modern languages had support for building reliable software as a design goal. Most seem to be cooked up by a lone "genius" hacker, with the chief goal being the ability to facilitate cranking out lots of code quickly, perhaps in some new way that hacker favors.
All of those are good for safety-critical applications; but consider also the ability to assign a layout (down to the bits) and the ability to [optionally] specify that such a record can ONLY be at a certain location (useful for things like video-memory mappings).
Consider that a lot of safety-critical applications are also without standard (in the senses both of "widespread" and of "forward-compatible") interfaces; examples: nuclear reactors, rocket engines (the engineering itself differs from generation to generation*), models of aircraft.
The upcoming Ada 2012 standard DOES have actual contracts, in the form of pre- and post-conditions; example (taken from http://www2.adacore.com/wp-content/uploads/2006/03/Ada2012_Rational_Introducion.pdf):
generic
   type Item is private;
package Stacks is
   type Stack is private;
   function Is_Empty(S: Stack) return Boolean;
   function Is_Full(S: Stack) return Boolean;
   procedure Push(S: in out Stack; X: in Item)
     with
       Pre  => not Is_Full(S),
       Post => not Is_Empty(S);
   procedure Pop(S: in out Stack; X: out Item)
     with
       Pre  => not Is_Empty(S),
       Post => not Is_Full(S);
   Stack_Error: exception;
private
   -- Private portion.
end Stacks;
Also, another thing that gets glossed over is the ability to exclude Null from your Access/pointer types; this is useful in that you can a) specify that exclusion in your subprogram parameters, b) streamline your algorithm (since you don't have to check for null at every use), and c) let your exception handler deal with the (presumably) exceptional circumstance of a Null.
* The Ariane 5 disaster occurred precisely because management disregarded this fact and had the programmers use the incorrect specifications: those of the Ariane 4.
AdaCore has a nice presentation of various safety features of Ada 2005 here:
http://www.adacore.com/knowledge/technical-papers/safe-secure/
The US Government and industry also ran studies on program reliability a long time ago that compared languages. I couldn't find one very quickly as the sites are all old (!), but here's a quote from DDC-I's website: "In studies conducted during the eighties Ada consistently outperformed established programming languages like Pascal, Fortran, and C. In the nineties, Ada continues to surpass C++ in performance evaluations measuring capability, efficiency, maintenance, risk, and lifecycle cost."
The link below lists reasons they used it on the Comanche project. I'll add that the platform implementations have been around for a LONG time and have stayed stable. As a source in the article said, maintenance is where the majority of costs come in. We've seen the modern contenders .NET and Java change like crazy. The long-term stability of Ada is better for safety-critical apps, which are often fielded for long periods (sometimes decades).
http://www.ddci.com/displayNews.php?fn=programs_rah66.php
Another benefit is that Ada was designed for cross-language development. I keep seeing people in the news talking about how .NET and the JVM are innovative because they let you mix the "right tools for the job" into one system. Ada has had that ability for a long time. It's common for apps to be a mix of Ada, C, C++, assembler, etc. (MULTOS CA comes to mind.) And they still function well.
It hasn't been static either. They keep updating the language, most recently in 2012. Its portability has allowed it to run on both the JVM and .NET for people who want those libraries or have plenty of existing code there. There are also Ada development tools and robust runtimes for many OS's and RTOS's from IBM, Aonix, AdaCore, and Green Hills.
Last benefit: if it will compile, it will work. Usually.
I know this is late, but it's an example I recently encountered. I wrote the following C++ function that worked fine in -O0:
size_t get_index(const Monomial & t, const Monomial & u) {
  get_index(t, u.log()); // forgot to type "return" first...
}
This actually compiles, and while it might emit a warning if you're lucky enough to have a decent compiler, you're not likely to see it when it's one of a lot of programs being compiled. Miraculously, it ran just fine when I compiled it with -O0. But when I compiled it with -O3 it crashed every time, and for the life of me I couldn't figure out why, because I didn't see the warning (if it even appeared). Imagine debugging that when your mind inserts a return there simply because you know your intent.
Likewise, back when I was learning C I frequently made this mistake:
int a;
scanf("%d", a); /* left out & before the a */
Using ints for pointers and vice versa was considered normal programming practice in C, so much so that compilers 25 years ago didn't even bother to emit a warning. Heck, that was a feature of C, not a bug. (See, for instance, Brian Kernighan's "Why Pascal is Not My Favorite Programming Language.") And of course back then home computer OS's didn't have memory protection; if you were lucky the computer wasn't writing to the hard disk when it reset.
These kinds of mistakes won't even compile in Ada. Functions have to return a value, and you cannot accidentally use an Integer in place of an access Integer (i.e., pointer to integer).
And that's just the start!
Ada has superior support for real-time programming (http://www.adaic.org/resources/add_content/standards/05rat/html/Rat-1-3-4.html). This lets the programmer achieve a greater degree of determinism rather than worry about the technical details of the language itself. While many of the run-time features supported by Ada can be achieved in C with some deep understanding, Ada achieves most real-time features very well because they are standardized. Ada even has a Ravenscar profile (http://en.wikipedia.org/wiki/Ravenscar_profile), and SPARK, a language for systems where "highly reliable operation is essential", is based on a subset of Ada 83 and 95 (http://en.wikipedia.org/wiki/SPARK_%28programming_language%29). My guess is that SPARK does not have a version for later Ada revisions because it is too early to tell how safe the newer versions really are. The latter article also mentions that Ada can be optimized to speeds rivaling C, which matters for real-time applications that rely on precise control during rapidly changing events. There are many built-in standard features for real-time control, which are obviously important for a 'safety-critical' language.

Do functional languages cope well with complexity?

I am curious how functional languages compare (in general) to more "traditional" languages such as C# and Java for large programs. Does program flow become difficult to follow more quickly than if a non-functional language is used? Are there other issues or things to consider when writing a large software project using a functional language?
Thanks!
Functional programming aims to reduce the complexity of large systems, by isolating each operation from others. When you program without side-effects, you know that you can look at each function individually - yes, understanding that one function may well involve understanding other functions too, but at least you know it won't interfere with some other piece of system state elsewhere.
Of course this is assuming completely pure functional programming - which certainly isn't always the case. You can use more traditional languages in a functional way too, avoiding side-effects where possible. But the principle is an important one: avoiding side-effects leads to more maintainable, understandable and testable code.
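A minimal sketch of that principle in Haskell (applyDiscount is a hypothetical example, not from the question):

-- A pure function: the result depends only on the arguments, so this
-- definition is the whole story; no hidden state elsewhere can change it.
applyDiscount :: Double -> Double -> Double
applyDiscount rate price = price * (1 - rate)

-- applyDiscount 0.1 200 == 180.0 in production, in a test harness, anywhere.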
Does program flow become difficult to follow more quickly than if a non-functional language is used?
"Program flow" is probably the wrong concept to analyze a large functional program. Control flow can become baroque because there are higher-order functions, but these are generally easy to understand because there is rarely any shared mutable state to worry about, so you can just think about arguments and results. Certainly my experience is that I find it much easier to follow an aggressively functional program than an aggressively object-oriented program where parts of the implementation are smeared out over many classes. And I find it easier to follow a program written with higher-order functions than with dynamic dispatch. I also observe that my students, who are more representative of programmers as a whole, have difficulties with both inheritance and dynamic dispatch. They do not have comparable difficulties with higher-order functions.
Are there other issues or things to consider when writing a large software project using a functional language?
The critical thing is a good module system. Here is some commentary.
The most powerful module system I know of is the unit system of PLT Scheme, designed by Matthew Flatt and Matthias Felleisen. This very powerful system unfortunately lacks static types, which I find a great aid to programming.
The next most powerful system is the Standard ML module system. Unfortunately Standard ML, while very expressive, also permits a great many questionable constructs, so it is easy for an amateur to make a real mess. Also, many programmers find it difficult to use Standard ML modules effectively.
The Objective Caml module system is very similar, but there are some differences which tend to mitigate the worst excesses of Standard ML. The languages are actually very similar, but the styles and idioms of Objective Caml make it significantly less likely that beginners will write insane programs.
The least powerful/expressive module system for a functional language is the Haskell module system. This system has a grave defect: there are no explicit interfaces, so most of the cognitive benefit of having modules is lost. Another sad outcome is that while the Haskell module system gives users a hierarchical name space, use of this name space (import qualified, in case you're an insider) is often deprecated, and many Haskell programmers write code as if everything were in one big, flat namespace. This practice amounts to abandoning another of the big benefits of modules.
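For readers outside Haskell, the qualified style the answer refers to looks like this (a small hypothetical fragment; Data.Map comes from the standard containers package):

import qualified Data.Map as Map

-- Qualified names keep the hierarchical namespace visible at every use site.
inventory :: Map.Map String Int
inventory = Map.fromList [("bolts", 40), ("nuts", 25)]

-- An unqualified 'import Data.Map' would instead pour fromList, lookup, etc.
-- into one flat namespace - the practice the answer laments.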
If I had to write a big system in a functional language and had to be sure that other people understood it, I'd probably pick Standard ML, and I'd establish very stringent programming conventions for use of the module system. (E.g., explicit signatures everywhere, opaque ascription with :>, and no use of open anywhere, ever.) For me the simplicity of the Standard ML core language (as compared with OCaml) and the more functional nature of the Standard ML Basis Library (as compared with OCaml) are more valuable than the superior aspects of the OCaml module system.
I've worked on just one really big Haskell program, and while I found (and continue to find) working in Haskell very enjoyable, I really missed having explicit signatures.
Do functional languages cope well with complexity?
Some do. I've found ML modules and module types (both the Standard ML and Objective Caml flavors) invaluable tools for managing complexity, understanding complexity, and placing unbreachable firewalls between different parts of large programs. I have had less good experiences with Haskell.
Final note: these aren't really new issues. Decomposing systems into modules with separate interfaces checked by the compiler has been an issue in Ada, C, C++, CLU, Modula-3, and I'm sure many other languages. The main benefit of a system like Standard ML or Caml is that you get explicit signatures and modular type checking (something that the C++ community is currently struggling with around templates and concepts). I suspect that these issues are timeless and are going to be important for any large system, no matter the language of implementation.
I'd say the opposite. It is easier to reason about programs written in functional languages due to the lack of side-effects.
Usually it is not a matter of "functional" vs "procedural"; it is rather a matter of lazy evaluation.
Lazy evaluation is when you can handle values without actually computing them yet; rather, the value is attached to an expression which should yield the value if it is needed. The main example of a language with lazy evaluation is Haskell. Lazy evaluation allows the definition and processing of conceptually infinite data structures, so this is quite cool, but it also makes it somewhat more difficult for a human programmer to closely follow, in his mind, the sequence of things which will really happen on his computer.
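A classic Haskell illustration of both sides of that trade-off (self-contained, needs only the standard Prelude):

-- An infinite list of Fibonacci numbers; laziness computes only what is demanded.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)   -- [0,1,1,2,3,5,8,13,21,34]
-- ...whereas e.g. 'length fibs' would never terminate: nothing in the text
-- of the program marks which demands are safe, which is the concern above.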
For mostly historical reasons, most languages with lazy evaluation are "functional". I mean that these languages have good syntactic support for constructions which are typically functional.
Without lazy evaluation, functional and procedural languages allow the expression of the same algorithms, with the same complexity and similar "readability". Functional languages tend to value "pure functions", i.e. functions which have no side-effects. The order of evaluation of pure functions is irrelevant; in that sense, pure functions help the programmer by flagging the parts for which knowing what happens in what order is not important. But that is an indirect benefit, and pure functions also appear in procedural languages.
From what I can say, here are the key advantages of functional languages for coping with complexity:
Functional programming hates side-effects. You can really black-box the different layers, and you won't be afraid of parallel processing (an actor model like Erlang's is much easier to use than locks and threads).
Culturally, functional programmers are used to designing a DSL to express and solve a problem. Identifying the fundamental primitives of a problem is a radically different approach from rushing to the brand new trendy framework.
Historically, this field has been led by very smart people: garbage collection, object orientation, metaprogramming... all those concepts were first implemented on functional platforms.
There is plenty of literature.
But the downside of those languages is that they lack support and experience in industry. Portability, performance, and interoperability may be a real challenge, whereas on another platform like Java all of this seems obvious. That said, a language based on the JVM like Scala could be a really nice fit, benefiting from both sides.
Does program flow become difficult to follow more quickly than if a non-functional language is used?
This may be the case, in that functional style encourages the programmer to prefer thinking in terms of abstract, logical transformations, mapping inputs to outputs. Thinking in terms of "program flow" presumes a sequential, stateful mode of operation--and while a functional program may have sequential state "under the hood", it usually isn't structured around that.
The difference in perspective can be easily seen by comparing imperative vs. functional approaches to "process a collection of data". The former tends to use structured iteration, like a for or while loop, telling the program "do this sequence of tasks, then move to the next one and repeat, until done". The latter tends to use abstracted recursion, like a fold or map function, telling the program "here's a function to combine/transform elements--now use it". It isn't necessary to follow the recursive program flow through a function like map; because it's a stateless abstraction, it's sufficient to think in terms of what it means, not what it's doing.
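In Haskell the contrast looks like this (a minimal sketch; 'total' and 'doubled' are names invented for the illustration):

-- "Sum the collection" as a fold: no loop counter, no mutable accumulator to trace.
total :: [Int] -> Int
total = foldr (+) 0

-- "Transform each element" as a map: the element-level function is the whole story.
doubled :: [Int] -> [Int]
doubled = map (* 2)

-- total [1, 2, 3] == 6; doubled [1, 2, 3] == [2, 4, 6]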
It's perhaps somewhat telling that the functional approach has been slowly creeping into non-functional languages--consider foreach loops, Python's list comprehensions...

Is a functional language a good choice for a Flight Simulator? How about Lisp?

I have been doing object-oriented programming for a few years now, and I have not done much functional programming. I have an interest in flight simulators, and am curious about the functional programming aspect of Lisp. Flight simulators or any other real world simulator makes sense to me in an object-oriented paradigm.
Here are my questions:
Is object oriented the best way to represent a real world simulation domain?
I know that Common Lisp has CLOS (OO for lisp), but my question is really about writing a flight simulator in a functional language. So if you were going to write it in Lisp, would you choose to use CLOS or write it in a functional manner?
Does anyone have any thoughts on coding a flight simulator in lisp or any functional language?
UPDATE 11/8/12 - A similar SO question for those interested -> How does functional programming apply to simulations?
It's a common mistake to think of "Lisp" as a functional language. Really it is best thought of as a family of languages, probably, but these days when people say Lisp they usually mean Common Lisp.
Common Lisp allows functional programming, but it isn't a functional language per se. Rather it is a general-purpose language. Scheme is a much smaller variant that is more functional in orientation, and of course there are others.
As for your question, is it a good choice? That really depends on your plans. Common Lisp particularly has some real strengths for this sort of thing. It's both interactive and introspective at a level you usually see in so-called scripting languages, making it very quick to develop in. At the same time it's compiled with efficient compilers, so you can expect performance in the same ballpark as other compiled languages (within a factor of two of C is typical, in my experience). While a large language, it has a much more consistent design than things like C++, and the metaprogramming capabilities can make for very clean, easy-to-understand code for your particular application. If you only look at these aspects, Common Lisp looks amazing.
However, there are downsides. The community is small; you won't find many people to help if that's what you're looking for. While the built-in library is large, you won't find as many third-party libraries, so you may end up writing more from scratch. Finally, while it's by no means a walled garden, CL doesn't have the kind of smooth integration with foreign libraries that, say, Python does. That doesn't mean you can't call C code; there are nice tools for this.
By the way, CLOS is about the most powerful OO system I can think of, but it is quite a different approach. If you're coming from a mainstream C++/Java/C#/etc. OO background (yes, they differ, but beyond single vs. multiple inheritance, not that much), you may find it a bit strange at first, almost turned inside out.
If you go this route, you are going to have to watch for some issues with the performance of the actual rendering pipeline, if you write that yourself with CLOS. The class system has incredible runtime flexibility (i.e. updating class definitions at runtime, not via monkey patching etc. but by actually changing the class and updating instances); however, you pay some dispatch cost for this.
For what it's worth, I've used CL in the past for research code requiring numerical efficiency, i.e. simulations of a different sort. It works well for me. In that case I wasn't worried about using existing code -- it didn't exist, so I was writing pretty much everything from scratch anyway.
In summary, it could be a fine choice of language for this project, but not the only one. If you don't use a language with both high-level aspects and good performance (like CL has, as does OCaml, and a few others) I would definitely look at the possibility of a two level approach with a language like lua or perhaps python (lots of libs) on top of some c or c++ code doing the heavy lifting.
If you look at the game or simulator industry you find a lot of C++ plus maybe some added scripting component. There can also be tools written in other languages for scenery design or related tasks. But there is only very little Lisp used in that domain. You need to be a good hacker to get the necessary performance out of Lisp and to be able to access or write the low-level code. How do you get this knowhow? Try, fail, learn, try, fail less, learn, ... There is nothing but writing code and experimenting with it. Lisp is really useful for good software engineers or those that have the potential to be a good software engineer.
One of the main obstacles is the garbage collector. Either you have a very simple one (then you have a performance problem with random pauses) or you have a sophisticated one (then you have a problem getting it to work right). Only a few garbage collectors exist that would be suitable - most Lisp implementations have good GC implementations, but they are still not tuned for real-time or near-real-time use. Exceptions do exist. With C++ you can forget the GC, because there usually is none.
The other alternative to automatic memory management with a garbage collector is to use no GC and manage memory 'manually'. This is used by some (even commercial) Lisp applications that need to support some real-time response (for example process control expert systems).
The nearest thing developed in that area was Crash Bandicoot (and later games) from Naughty Dog, for the PlayStation 1 (the later games were for the PlayStation 2). After they were bought by Sony, they switched to C++ for the PlayStation 3. Their development environment was written in Allegro Common Lisp and included a compiler for a Scheme (a Lisp dialect) variant. On the development system the code was compiled and then downloaded to the PlayStation during development. They had their own 3D engine (very impressive, always got excellent reviews from game magazines), incremental level loading, complex behaviour control for lots of different actors, etc. So the PlayStation was really executing the Scheme code, but memory management was not done via GC (afaik). They had to develop all the technology on their own - nobody was offering Lisp-based tools - but they could, because they were excellent software developers. Since then I haven't heard of a similar project. Note that this was not just Lisp for scripting - it was Lisp all the way down.
On the Scheme side there is also a new, interesting implementation called Ypsilon Scheme. It is developed for a pinball game - this could be the base for other games, too.
On the Common Lisp side, there have been Lisp applications talking to flight simulators and controlling aspects of them. There are some game libraries that are based on SDL. There are interfaces to OpenGL. There is also something like the 'Open Agent Engine'. There are also some 3d graphics applications written in Common Lisp - even some complex ones. But in the area of flight simulation there is very little prior art.
On the topic of CLOS vs. Functional Programming. Probably one would use neither. If you need to squeeze all possible performance out of a system, then CLOS already has some overheads that one might want to avoid.
Take a look at Functional Reactive Programming. There are a number of frameworks for this in Haskell (don't know about other languages), most of which are based around arrows. The basic idea is to represent relationships between time-varying values and events. So for example you would write (in Haskell arrow notation using no particular library):
velocity <- {some expression of airspeed, heading, gravity etc.}
position <- integrate <- velocity
The second line declares the relationship between position and velocity. The <- arrow operators are syntactic sugar for a bunch of library calls that tie everything together.
Then later on you might say something like:
groundLevel <- getGroundLevel <- position
altitude <- getAltitude <- position
crashed <- liftA2 (<) altitude groundLevel
to declare that if your altitude is less than the ground level at your position then you have crashed. Just as with the other variables here, "crashed" is not just a single value, it's a time-varying stream of values. That is why the "liftA2" function is used to "lift" the comparison operator from simple values to streams.
IO is not a problem in this paradigm. Inputs are time varying values such as joystick X and Y, while the image on the screen is simply another time varying value. At the very top level your entire simulator is an arrow from the inputs to the outputs. Then you call a "run" function that converts the arrow into an IO action that runs the game.
If you write this in Lisp you will probably find yourself creating a bunch of macros that basically re-invent arrows, so it might be worth just finding out about arrows to start with.
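To make the pseudocode above concrete, here is a toy signal-function arrow in self-contained Haskell - a sketch only, not a real FRP library such as Yampa. Signals are modelled as lists of samples at a fixed time step (dt = 1), and 'SF', 'integrate', and 'falling' are names invented for this illustration:

{-# LANGUAGE Arrows #-}
import Control.Arrow
import Control.Category
import Prelude hiding (id, (.))

-- A signal function transforms a sampled stream of inputs into outputs.
newtype SF a b = SF { runSF :: [a] -> [b] }

instance Category SF where
  id = SF id
  SF g . SF f = SF (g . f)

instance Arrow SF where
  arr f = SF (map f)
  first (SF f) = SF (\xs -> let (as, cs) = unzip xs in zip (f as) cs)

-- Integration as a running sum (time step dt = 1, for simplicity).
integrate :: SF Double Double
integrate = SF (scanl1 (+))

-- The velocity/position relationships from the answer, in arrow notation:
falling :: SF Double Double
falling = proc gravity -> do
  velocity <- integrate -< gravity
  position <- integrate -< velocity
  returnA  -< position

main :: IO ()
main = print (runSF falling (replicate 5 (-9.8)))

Even this toy version shows the point of the notation: the proc block reads as a set of declared relationships between streams, while the Arrow instance hides all the stream plumbing.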
I don't know anything about flight sims, and you haven't listed anything in particular they consist of, so this is mostly a guess about writing a FS in Lisp.
Why not:
Lisp excels at exploratory programming. I think that since FSs have been around so long, and there are free and open-source examples, it would not benefit as much from this type of programming.
Flight sims are mostly (I'm guessing) written in static, natively compiled languages. If you're looking for pure runtime performance, in Lisp this tends to mean type declarations and other not-so-Lispy constructs. If you don't get the performance you want with naive approaches, your optimized Lisp might end up looking a lot like C, and Lisp isn't as good as C at writing C.
A lot of a FS, I'm guessing, is interfacing to a graphics library like OpenGL, which is written in C. Depending on how your FFI / OpenGL bindings are, this might, again, make your code look like C-in-Lisp. You might not have the big win that Lisp does in, say, a web app (which consists of generating a tree structure of plain text, which Lisp is great at).
Why:
I took a glance at the FlightGear source code, and I see a lot of structural boilerplate -- even a straight port might end up being half the size.
They use strings for keys all over the place (C++ doesn't have symbols). They use XML for semi-human-readable config files (C++ doesn't have a runtime reader). Simply switching to native Lisp constructs here could be big win for minimal effort.
Nothing looks at all complex, even the "AI". It's simply a matter of keeping everything organized, and Lisp will be great at this because it'll be a lot shorter.
But the neat thing about Lisp is that it's multi-paradigm. You can use OO for organizing the "objects", and FP for computation within each object. I say just start writing and see where it takes you.
I would first think of the nature of the simulation.
Some simulations require interaction, like a flight simulator. I don't think functional programming is a good choice for an interactive (read: CPU-intensive/response-critical) application. Of course, if you have access to 8 PS3's wired together with Linux, you'll not care too much about performance.
For simulations like evolutionary/genetic programming where you set it up and let 'er rip, a functional language may help model the problem domain better than an OO language. Not that I'm an expert in functional programming, but the ease of coding recursion and the idea of lazy evaluation common in functional languages seem to me a good fit for the 'let her rip' sort of sims.
I wouldn't say functional programming lends itself particularly well to flight simulation. In general, functional languages can be very useful for writing scientific simulations, though this is a slightly specialised case. Really, you'd probably be better off with a standard imperative (preferably OOP) language like C++/C#/Java, as they would tend to have the better physics libraries as well as graphics APIs, both of which you would need to use very heavily. Also, the OOP approach might make it easier to represent your environment. Another point to consider is that (as far as I know) the popular flight simulators on the market today are written pretty much entirely in C++.
Essentially, my philosophy is that if there's no particularly good reason you need functional paradigms, then don't use a functional language (though there's nothing to stop you using functional constructs in OOP/mixed languages). I suspect you're going to have a much less painful development process using the well-tested APIs for C++ and languages more commonly associated with game development (which has many commonalities with flight sim). Now, if you want to add some complex AI to the simulator, Lisp might seem like a rather more obvious choice, though even then I wouldn't jump for it. And finally, if you're really keen on using a functional language, I would recommend you go with one of the more general-purpose ones like Python or even F# (both mixed imperative-functional languages really), as opposed to Lisp, which could end up getting rather ugly for such a project.
There are a few problems with functional languages, the main one being that they don't mesh well with state, though they do go well with process. So in a way it could be said they are action-oriented. This means you'll be wasting your time simulating a plane; what you want to do is simulate the actions of flying a plane. Once you grok that, you can probably get it to work.
Now as a side point, Haskell wouldn't be good IMHO, because it's too abstract for a "game": this sort of app is all about input/output, but Haskell is about avoiding IO, so it'll become a monad nightmare and you'll be working against the language. Lisp is a better choice, or Lua or JavaScript; they are also functional, but not purely functional, so for your case try Lisp. Anyway, in any of these languages your graphics will be C or C++.
A serious issue, however, is that there is very little documentation, and fewer tutorials, about functional languages and "games". Scientific simulation is academically documented, but those papers are quite dense. If you succeed, maybe you could write up your experiences for others, as it's a rather empty field right now.

What are the practical advantages of learning Assembly? [closed]

Most people suggest that learning assembly is essential: it's important to know the underlying workings of the computer, and so forth. But what I'm looking for are some practical suggestions that will make the effort of learning Assembly worth it.
What are your suggestions? What am I missing out on by not learning Assembly and pointers/memory management in general?
I think the main practical advantage to learning low-level things like assembly language, pointers, and memory management is that when you're writing or reviewing high-level code you're better able to instinctively or subconsciously spot performance issues or other pitfalls.
An average developer might write a simple loop and think, "This code iterates over a set of integers and writes each to the console."
An expert developer might write the same loop and think, "This code iterates over a set of integers, and has to box each element to call the ToString method and ToString has to format the string in base 10 which is somewhat non-trivial, and then both the boxed integer and the formatted string will soon be eligible for garbage collection as no references will remain, and the first time this method runs, it will need to be JIT'ed..." and so on.
9 times out of 10, it may not matter. But that 1 time out of 10, the expert developer is likely to notice a problem in code that the average developer would never think to consider.
Pointers/memory management are more general than assembly language. You need to understand them for C and C++ as well, which you might need if you have to maintain code written in C.
For assembly language, it is sometimes useful to read the assembler code that the C compiler generates, to find out whether it generates correct and efficient code.
You need to learn to read assembly so you can figure out what goes wrong when a complex statement bombs out. The CPU debug window shouldn't be a mysterious place.
This is sort of one of those questions that will always be asked: "Why should I know anything." etc. Well, perhaps you could get a job doing something besides building the next generic CRUD application or something like that. If you want to do any sort of system development, having a working knowledge of assembly is very helpful, if not vital. As far as what you're "missing out" perhaps you are missing out on actually knowing how computers work. Some people think this is desirable. Some people don't. Some people build processors. Some people dig ditches. It's all a matter of personal preference :)
I think it's great to learn new languages. It opens my mind. Some languages are more mind-opening than others. I'd say assembler is one of those. It forces you to think about stuff like the call stack and instruction pointer. And it'll make you appreciate higher level languages even more. Another fun language to learn is PostScript.
I don't think you need to learn assembly for anything practical. However, it will ensure that you understand the real roots of what you are doing as a developer. In essence, assembly programming is a discipline for learning chip logic and architecture. I haven't programmed assembly in over two decades but it still informs the kinds of choices I make when programming C#.
But what I'm looking for are some practical suggestions that will make the effort of learning Assembly to be worth it.
Learn what assembly is.
Really learn how to read (and understand) small fragments of it: how to walk/step through it in your mind.
Perhaps too, step through some of it with a debugger (including seeing memory and registers being changed).
Ideally, find some annotated assembly.
But, don't bother to learn how to write assembly: instead, learning to write C or C++ is probably 'low' enough for most practical purposes.
Well, on a practical level I did a class in 6502 assembler when I was first learning to code in the early 80s. I also did some 8088 assembler. It's been of occasional use over the years since, but I can't say it's ever really got me out of a hole on more than one or two occasions in 25 years. Grokking C at a pretty fundamental level is of far more use. YMMV, and it's certainly helpful as background, but as a direct practical benefit? Marginal, really.
Perversely though one thing that has proved useful is at an even lower level. I did a class on chip design (NAND gates and the like) and as part of that was taught formal Boolean logic at some depth. That's been massively useful ever since - it's surprising the number of coders who don't really know what they are doing with ands, ors and nots :-)
Pointers and memory management are really a different question than assembly. If you want to do C/C++, then you need to learn pointers and memory management, because those are part of the language. But, even if you plan to use nothing but (say) Java all your life, you should learn something about memory management to keep from writing a memory leak despite the GC, and pointers are just the difference between atomic types and object references. You need the concepts or you'll write programs that don't work!
Practical reasons for learning assembly: debugging and optimization. Even if you don't write any assembly, one of these days you may need to optimize C/C++ code for performance. In that case, you'll need to be able to read the assembly for your inner loop, even if you never need to write another line of it.
Ultimately, I think your distinction between "knowing the underlying workings of your computer" and "practical suggestions that will make the effort of learning assembly worth it" is a false one. Ignorance does not pay. Learning how your computer works is a practical suggestion worth the effort!
I have a prophecy: someday soon, your program will run far too slowly to be practical, and crash intermittently with an out-of-memory exception. On that day, the sheer screaming anxiety of not knowing what the hell is going on or where to start looking in order to fix it will refund your karma debt, with interest...
These days many assembly languages are actually fairly high level.
And it's always been true that if you learn 'C', that's close enough to assembly to get most of the learning benefits.
edit: thinking about this a bit more, in Knuth's books he describes an idealised assembly language. You won't go far wrong learning that, and reading those books.
Another practical reason I can think of is reverse engineering application code to modify it for educational purposes ONLY, since this is widely used by crackers to bypass shareware application protections like time-limit or serial numbers.
An application like win32Dasm can convert executables into assembly code that can later be modified with a Hex editor like hiew. You can learn quite a lot about the flow of the program.
I think learning about computer architecture, in conjunction with assembly, would open your mind quite a bit.
It would help explain lots of performance issues - e.g. a parser is slow because it has lots of branches, the pipeline gets flushed very easily, and the branch predictor cannot compensate for everything.
Also, different architectures have their quirks. Someone talked about an assembly trick to swap 2 registers in place, involving xors. It works, and it would run great on an in-order execution core (the most recent examples would be the Intel Atom and the Via C7 in netbooks), but not so great on out-of-order cores.
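For the curious, here is that xor-swap identity written out, in Haskell purely to show the algebra (xorSwap is a name invented for this illustration; the performance point above concerns real registers):

import Data.Bits (xor)

-- The classic three-xor register swap, one step per line.
xorSwap :: Int -> Int -> (Int, Int)
xorSwap a b =
  let s1 = a `xor` b    -- register A now holds a^b
      s2 = s1 `xor` b   -- register B now holds (a^b)^b == a
      s3 = s1 `xor` s2  -- register A now holds (a^b)^a == b
  in (s3, s2)

-- xorSwap 6 9 == (9, 6). Each step consumes the previous result, so the
-- three instructions form a dependency chain an out-of-order core cannot
-- overlap, unlike an ordinary swap through a scratch register.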
Knowing this sort of thing may help you detect poorly compiled code by inspecting it in assembly, and possibly write code in a higher-level language to sidestep the imperfections of compiler optimizers. I'm not trying to diss them, but they just can't be perfectly tuned.
The biggest practical advantage to learning Assembly is performance. You can optimize to near perfection when it's required.

Resources