lowest level language until asp.net?

It's assembler, right? Can someone please point out the progression we've had in programming languages from assembler to the days of ASP.NET, namely the chronological order of languages?

Here's a wiki timeline of all programming languages.
I would include an FTA table, but the list is very robust and extensive.
Also, the lowest language you ever get to is assembly (aside from straight-up issuing machine instructions), regardless of what other language is built on top (including ASP.NET). Other languages are really just abstractions on top of assembly. In fact, ASP.NET gets compiled into IL (Intermediate Language) code, which then gets JITed into assembly. Assembly is as close to the metal as you're going to get.

To be pedantic, "assembler" is not actually a language (any more than "compiler" is;-) -- rather, it's a program that takes a source file in "assembly language" and emits binary machine code. The binary machine code can be said to be lower-level than the assembly language, since the latter allows use of some symbols and often includes a macro processing ability as well.
"Below" binary machine code, there may be other levels, known as "microcode" (but there might not be -- the CPU might be implemented entirely in real hardware, without any microprogramming aspect). That might be relevant only if the system's architecture allowed programmers to alter the microcode, especially by adding to it, etc -- there have been machines that did that, but I don't believe any currently commercialized CPU does. So you probably don't have to care about that (and the by-now-esoteric distinctions between vertical and horizontal microcode, etc, etc;-).

Programming languages are just ways to assemble solutions to computing problems.
The argument is "assembled out of what?"
From that point of view, I'd suggest the following evolutionary curve:
Napier's Bones
Babbage's difference engine
Jacquard (card) looms
(Conceptual) Abstract Turing machines/Post Systems/Church's calculus
Relay Computers (Aiken?)
Vacuum tubes as switching elements (ENIAC)
Transistor-based computers
Microprogrammed machines
Integrated Circuits
Large Scale Circuits
with "assembler" being the programming language used to
put together solutions consisting of instructions for
real machines starting with the vacuum tube systems.
(I'm not sure the relay machines actually had assemblers).
Programming langauges are just ways to put together high
level commands that reduce in effect to assembler instructions.
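For a concrete sense of that reduction, here's a one-line C function next to the x86-64 instructions a typical optimizing compiler turns it into (shown as a sketch; the exact output varies by compiler and target):

/* A single high-level "command"... */
int add(int a, int b)
{
    return a + b;   /* ...which a typical x86-64 compiler at -O2 reduces to roughly:
                     *     lea eax, [rdi + rsi]
                     *     ret
                     */
}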

There are two different dimensions to consider here, what I'd call vertical growth (languages build up over time from one generation to the next) and horizontal growth (syntactic improvements and reduction in complexity).
A good explanation of vertical change is here: http://web.sxu.edu/rogers/sys/generations.html
And a nice, yet incomplete, illustration of horizontal change is here: http://oreilly.com/news/graphics/prog_lang_poster.pdf

Related

What does it mean if someone refers to something as BootStrap?

I hear the term "BootStrap" thrown around a lot, but I'm not really sure what it refers to. I know there is a bootstrap CSS, but what exactly does the term mean?
Literally, a bootstrap is a tab on the sides or back of boots that helps you to pull them on. Putting on your shoes or boots is usually the last step of getting dressed; similarly, in programming it's been applied to the initialization or start-up step of a program.
See also the Wikipedia entry for bootstrapping:
Bootstrapping or booting refers to a group of metaphors which refer to a self-sustaining process that proceeds without external help.
[.. in Software Loading] booting is the process of starting a computer, specifically in regards to starting its software. The process involves a chain of stages, in which at each stage a smaller simpler program loads and then executes the larger more complicated program of the next stage. It is in this sense that the computer "pulls itself up by its bootstraps", i.e. it improves itself by its own efforts
[.. in Software Development] bootstrapping can also refer to the development of successively more complex, faster programming environments. The simplest environment will be, perhaps, a very basic text editor (e.g., ed) and an assembler program. Using these tools, one can write a more complex text editor, and a simple compiler for a higher-level language and so on, until one can have a graphical IDE and an extremely high-level programming language.
A shoehorn is another means to help you don footwear, but it's idiomatically come to mean cramming something into a tight space.
In computer science, Bootstrap (or more commonly "boot") generally refers to the setup/start/initialization step of a process. It can mean many things depending on the context: starting a physical machine, setting up variables and services for an application to use, or even laying the CSS groundwork for a website to implement.
Bootstrapping lets you create a complex design from just minimal configuration, rather than developing it from scratch.

Use cases for self-modifying code?

On a Von Neumann architecture, program and data are both stored in memory, so a program can modify itself. Is this useful for a programmer? Could you give some examples?
Metamorphism
One (questionable) use case that comes to my mind is metamorphic computer viruses. These are malicious pieces of software that conceal themselves from signature-based detection by rewriting their own machine code into a semantically equivalent representation that looks different.
Trampolining
Another (more complex, but also more common) use case is trampolining, a technique based on dynamic code generation to solve certain problems with nested function calls.
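"Trampoline" covers a couple of related tricks; runtime-generated trampolines are platform-specific, but the closely related functional-programming variant can be sketched in portable-ish C. Instead of functions calling each other directly (and growing the stack), each step returns a pointer to the next step, and a driver loop bounces between them. All names here are invented for illustration:

#include <stdio.h>

/* Each step returns a pointer to the next step (or NULL to stop).
   The driver loop below replaces deeply nested calls, so the stack
   never grows no matter how many steps run. */
typedef void *(*step_fn)(int *);

void *step_odd(int *n);

void *step_even(int *n) {
    if (*n == 0) return NULL;        /* done */
    (*n)--;
    return (void *)step_odd;         /* "call" step_odd without recursing */
}

void *step_odd(int *n) {
    if (*n == 0) return NULL;
    (*n)--;
    return (void *)step_even;
}

void trampoline(step_fn f, int *n) {
    while (f != NULL)
        f = (step_fn)f(n);           /* bounce: run one step, fetch the next */
}

int main(void) {
    int n = 1000000;                 /* this deep, mutual recursion would overflow */
    trampoline(step_even, &n);
    printf("n = %d\n", n);
    return 0;
}

(Casting between function pointers and void * is a POSIX-ism rather than strict ISO C, but it keeps the sketch short; stricter code returns a generic function pointer type such as void (*)(void) instead.)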
JIT compilation
The most common usage of dynamic code generation that I can think of is JIT (just-in-time) compilation. Languages on modern platforms like .NET or Java are not compiled into native machine code, but into some kind of intermediate language (called bytecode). This bytecode is then interpreted when the program is executed (by a virtual machine written for the target architecture). At the same time, a background process checks which parts of the code are executed very often. These parts then have a good chance of being dynamically compiled into native machine language for maximum performance. All this happens during the run time of the program!
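To make the "data becomes code" idea concrete, here is a minimal sketch of the trick at the core of a JIT, assuming x86-64 and a POSIX mmap(); the four code bytes and the function-pointer cast are the only magic, and none of this is how any particular VM actually does it:

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* Machine code for "return x + 1" in the x86-64 SysV calling
       convention:  lea eax, [rdi+1] ; ret                        */
    unsigned char code[] = { 0x8D, 0x47, 0x01, 0xC3 };

    /* Get a writable buffer and copy the "data" in... */
    void *buf = mmap(NULL, sizeof code, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) return 1;
    memcpy(buf, code, sizeof code);

    /* ...then flip the page from writable to executable (W^X friendly)
       and call it as an ordinary function.                             */
    if (mprotect(buf, sizeof code, PROT_READ | PROT_EXEC) != 0) return 1;
    int (*add_one)(int) = (int (*)(int))buf;
    printf("%d\n", add_one(41));   /* prints 42 */
    return 0;
}

A real JIT differs mainly in scale - it generates those bytes from bytecode instead of hard-coding them. Note the mprotect() step: it is exactly where the code/data separation discussed in the next section comes into play.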
Security implications
One thing to keep in mind is that the ability to interpret data as code is useful for exploiting security holes in computer software, which is why the trend in modern hardware and operating systems is to enable and, if possible, even enforce the separation of code and data (see also the NX bit and DEP).
I can best answer this by referring you to an answer to a similar (exceptionally well written and answered) question, also on StackOverflow - Homoiconic and "unrestricted" self modifying code + Is lisp really self modifying?. The answer focuses on Lisp, a family of languages known for taking "code is data" to the next level, and explores the uses of that in AI.

How is Ada a 'safety critical' language?

I tried googling and read some snippets online. Why is Ada a "safety critical" language? Some things I notice are
No pointers
Specify a range (this type is an integer but can only be 1-12)
Explicitly state if a function parameter is out or in/out
Range-based loops (To avoid bound errors or bound checking)
The rest of the syntax I either didn't understand or didn't see how it helps it to be 'safety critical'. These are some points but I don't see the big picture. Does it have design-by-contract that I am not seeing? Does it have rules that make it harder for code to compile (and if so what are some?) Why is it a 'safety critical' language?
Well, this is pretty simple. The reason there's a lot of Ada syntax that doesn't seem to have much of anything to do with making the language "safety-critical" (whatever that means for a language) is that this was not Ada's design goal. It was designed to be a general-purpose compiled systems programming language, sufficiently capable that the U.S. Department of Defense could get rid of all the little one-use languages it had to support all over the place.
The fact that the end result is a language that is rather useful for safety-critical applications was just a happy side-effect of the fact that the language was very well-designed with military applications (where lives are often staked on the software's reliability) in mind.
Surprisingly few other modern languages had support for building reliable software as a design goal. Most seem to be cooked up by a lone "genius" hacker, with the chief goal being the ability to facilitate cranking out lots of code quickly, perhaps in some new way that hacker favors.
All of those are good for safety-critical applications; but consider also the ability to specify a record's layout (down to the bits) and the ability to [optionally] specify that such a record can ONLY be at a certain location (useful for things like video-memory mappings).
Consider that a lot of safety-critical applications are also without standard (in the senses both of "widespread" and of forward-compatibility) interfaces; examples: nuclear reactors, rocket engines (the engineering itself differs from generation to generation*), models of aircraft.
The upcoming Ada 2012 standard DOES have actual contracts, in the form of pre- and post-conditions; example (taken from http://www2.adacore.com/wp-content/uploads/2006/03/Ada2012_Rational_Introducion.pdf):
generic
   type Item is private;
package Stacks is
   type Stack is private;
   function Is_Empty(S: Stack) return Boolean;
   function Is_Full(S: Stack) return Boolean;
   procedure Push(S: in out Stack; X: in Item)
     with
       Pre  => not Is_Full(S),
       Post => not Is_Empty(S);
   procedure Pop(S: in out Stack; X: out Item)
     with
       Pre  => not Is_Empty(S),
       Post => not Is_Full(S);
   Stack_Error: exception;
private
   -- Private portion.
end Stacks;
Also, another thing that gets glossed over is the ability to exclude null from your access (pointer) types; this is useful in that you can a) specify that exclusion in your subprogram parameters, b) streamline your algorithm (since you don't have to check for null at every use), and c) let your exception handler handle the (presumably) exceptional circumstance of a null.
* The Ariane 5 disaster occurred precisely because management disregarded this fact and had the programmers use the incorrect specifications: those of the Ariane 4.
AdaCore has a nice presentation of various safety features of Ada 2005 here:
http://www.adacore.com/knowledge/technical-papers/safe-secure/
The US Government and industry also ran studies on program reliability a long time ago that compared languages. I couldn't find one very quickly as the sites are all old (!), but here's a quote from DDCI's website: "In studies conducted during the eighties Ada consistently outperformed established programming languages like Pascal, Fortran, and C. In the nineties, Ada continues to surpass C++ in performance evaluations measuring capability, efficiency, maintenance, risk, and lifecycle cost."
The link below lists reasons they used it on the Comanche project. I'll add that the platform implementations have been around for a LONG time and stayed stable. Like a source in the article said, maintenance is where the majority of costs come in. We've seen the modern contenders .NET and Java change like crazy. The long-term stability of Ada is better for safety-critical apps, which are often fielded for long periods (sometimes decades).
http://www.ddci.com/displayNews.php?fn=programs_rah66.php
Another benefit is that Ada was designed for cross-language development. I keep seeing in the news people talking about how .NET and the JVM are innovative b/c they let you mix the "right tools for the job" into one system. Ada's had that ability for a long time. It's common for apps to be a mix of Ada, C, C++, assembler, etc. (MULTOS CA comes to mind.) And they still function well.
It hasn't been static either. They keep updating the language, most recently in 2012. Its portability allowed it to run on both the JVM and .NET too, for people who want the libraries or have plenty of existing code on those. There are also Ada development tools and robust runtimes for many OS's and RTOS's from IBM, Aonix, AdaCore, and Green Hills.
Last benefit: if it will compile, it will work. Usually.
I know this is late, but it's an example I recently encountered. I wrote the following C++ function that worked fine in -O0:
size_t get_index(const Monomial & t, const Monomial & u) {
    get_index(t, u.log()); // forgot to type "return" first...
}
This actually compiles, and while it might emit a warning if you're lucky enough to have a decent compiler, you're not likely to see it when it's one of a lot of programs being compiled. Miraculously, it ran just fine when I compiled it with -O0. But when I compiled it with -O3 it crashed every time, and for the life of me I couldn't figure out why, because I didn't see the warning (if it even appeared). Imagine debugging that when your mind fills in the return simply because you know your intent.
Likewise, back when I was learning C I frequently made this mistake:
int a;
scanf("%d", a); /* left out & before the a */
Using ints for pointers and vice versa was considered normal programming practice in C, so much so that compilers 25 years ago didn't even bother to emit a warning. Heck, that was a feature of C, not a bug. (See, for instance, Brian Kernighan's "Why Pascal is Not My Favorite Programming Language.") And of course back then home computer OS's didn't have memory protection; if you were lucky the computer wasn't writing to the hard disk when it reset.
These kinds of mistakes won't even compile in Ada. Functions have to return a value, and you cannot accidentally use an Integer in place of an access Integer (i.e., pointer to integer).
And that's just the start!
Ada has superior support for real-time programming: http://www.adaic.org/resources/add_content/standards/05rat/html/Rat-1-3-4.html . This allows the programmer to implement a greater degree of determinism rather than worry about the technical details of the programming language itself. While many of the run-time features supported by Ada can be achieved in C with some deep understanding of things, Ada achieves most real-time features very well since it's standardized. Ada even has a Ravenscar profile ( http://en.wikipedia.org/wiki/Ravenscar_profile ), and SPARK, a language for systems where "highly reliable operation is essential", is based on a subset of Ada 83 and 95 ( http://en.wikipedia.org/wiki/SPARK_%28programming_language%29 ). My guess is that SPARK does not have a version for later Ada versions b/c it is too early to tell how safe the newer versions really are. It's also mentioned in the latter article that Ada can be optimized for speeds rivaling C, which would be important for real-time applications that rely on precise control during rapidly changing events. There are many built-in standard features for real-time control, which are obviously important for a 'safety critical' language.

Fast interpreted language for memory constrained microcontroller

I'm looking for a fast interpreted language for a microcontroller.
The requirements are:
should be fast (not crucial but would be nice)
should be light on data memory (small overhead <8KB, excludes program variable space)
preferably would be small in program size and the language would be compact
preferably, human readable (for example, BASIC)
Thanks!
Some AVR interpreters:
http://www.cqham.ru/tbcgroup/index_eng.htm
http://www.jcwolfram.de/projekte/avr/chipbasic2/main.php
http://www.jcwolfram.de/projekte/avr/chipbasic8/main.php
http://www.jcwolfram.de/projekte/avr/main.php
http://code.google.com/p/python-on-a-chip/
http://www.avrfreaks.net/index.php?module=Freaks%20Academy&func=viewItem&item_id=688&item_type=project
http://www.avrfreaks.net/index.php?module=Freaks%20Academy&func=viewItem&item_id=626&item_type=project
http://www.avrfreaks.net/index.php?module=Freaks%20Academy&func=viewItem&item_id=460&item_type=project
This is a bit generic: there are many kinds of microcontrollers, and thanks to technologies like Jazelle, it is possible to run hardware-accelerated Java on microcontrollers (if your microcontroller supports it).
For a generic answer: Forth is commonly referenced. But really, you need to be far more specific with your question.
Micro-controllers come in a vast variety of architectures. There are small 8-bit families, 32-bit families with simple architectures, and 32-bit families with MMU support, suitable for running a modern OS. If you don't state which family you are targeting, it is impossible to answer your question.
Anyway, for 8-bit families the best you can get is a BASIC variant. See Bascom for example. Note that this would be a compiled version of the "interpreted" language. If you actually want to have a runtime or an interpreter that will execute your code, then you most probably need to install an operating system on your microcontroller.
There were a variety of interpreted languages for small micros back in the late 1970s and 1980s. They seem to have mostly fallen out of fashion. I'd like to have a p-code based C compiler for the PIC18 that could coexist nicely with my other C compiler; for much of my code I'd be willing to accept a 100-fold slowdown for a 50% space reduction (so long as I could keep the important stuff in native code). I would think that would be achievable, but I'm not about to implement such a thing from scratch myself.
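To give a feel for why such p-code interpreters can be so small, here is a toy sketch of the classic dispatch loop in C (the opcode set is invented for illustration): the whole VM is a switch over one-byte opcodes plus a small value stack, which is why BASIC- and Forth-style interpreters fit in a few KB on small parts.

#include <stdio.h>

/* A toy stack-based bytecode interpreter: the general shape of a p-code VM. */
enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

void run(const unsigned char *code) {
    int stack[32];
    int sp = 0;                                    /* stack pointer */
    for (;;) {
        switch (*code++) {
        case OP_PUSH:  stack[sp++] = *code++;          break;
        case OP_ADD:   sp--; stack[sp-1] += stack[sp]; break;
        case OP_MUL:   sp--; stack[sp-1] *= stack[sp]; break;
        case OP_PRINT: printf("%d\n", stack[sp-1]);    break;
        case OP_HALT:  return;
        }
    }
}

int main(void) {
    /* Bytecode for (2 + 3) * 4 */
    const unsigned char prog[] = {
        OP_PUSH, 2, OP_PUSH, 3, OP_ADD,
        OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT
    };
    run(prog);   /* prints 20 */
    return 0;
}

The space/speed trade-off mentioned above falls out of this structure: each bytecode instruction is only a byte or two, but every one pays the cost of a trip through the dispatch loop.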

Is a functional language a good choice for a Flight Simulator? How about Lisp?

I have been doing object-oriented programming for a few years now, and I have not done much functional programming. I have an interest in flight simulators, and am curious about the functional programming aspect of Lisp. Flight simulators, or any other real-world simulators, make sense to me in an object-oriented paradigm.
Here are my questions:
Is object oriented the best way to represent a real world simulation domain?
I know that Common Lisp has CLOS (OO for lisp), but my question is really about writing a flight simulator in a functional language. So if you were going to write it in Lisp, would you choose to use CLOS or write it in a functional manner?
Does anyone have any thoughts on coding a flight simulator in lisp or any functional language?
UPDATE 11/8/12 - A similar SO question for those interested -> How does functional programming apply to simulations?
It's a common mistake to think of "Lisp" as a functional language. Really it is probably best thought of as a family of languages, but these days when people say Lisp they usually mean Common Lisp.
Common Lisp allows functional programming, but it isn't a functional language per se. Rather it is a general purpose language. Scheme is a much smaller variant, that is more functional in orientation, and of course there are others.
As for your question, is it a good choice? That really depends on your plans. Common Lisp particularly has some real strengths for this sort of thing. It's both interactive and introspective at a level you usually see in so-called scripting languages, making it very quick to develop in. At the same time it's compiled and has efficient compilers, so you can expect performance in the same ballpark as other compiled languages (within a factor of two of C is typical, in my experience). While a large language, it has a much more consistent design than things like C++, and the metaprogramming capabilities can make for very clean, easy-to-understand code for your particular application. If you only look at these aspects, Common Lisp looks amazing.
However, there are downsides. The community is small; you won't find many people to help if that's what you're looking for. While the built-in library is large, you won't find as many third-party libraries, so you may end up writing more of it from scratch. Finally, while it's by no means a walled garden, CL doesn't have the kind of smooth integration with foreign libraries that, say, Python does. That doesn't mean you can't call C code; there are nice tools for this.
By the way, CLOS is about the most powerful OO system I can think of, but it is quite a different approach; if you're coming from a mainstream C++/Java/C#/etc. OO background (yes, they differ, but beyond single vs. multiple inheritance not that much) you may find it a bit strange at first, almost turned inside out.
If you go this route, you are going to have to watch for some issues with performance of the actual rendering pipeline, if you write that yourself with CLOS. The class system has incredible runtime flexibility (i.e., updating class definitions at runtime, not via monkey patching etc. but via actually changing the class and updating instances); however, you pay some dispatch cost for this.
For what it's worth, I've used CL in the past for research code requiring numerical efficiency, i.e. simulations of a different sort. It works well for me. In that case I wasn't worried about using existing code -- it didn't exist, so I was writing pretty much everything from scratch anyway.
In summary, it could be a fine choice of language for this project, but not the only one. If you don't use a language with both high-level aspects and good performance (as CL has, as does OCaml, and a few others), I would definitely look at the possibility of a two-level approach with a language like Lua or perhaps Python (lots of libs) on top of some C or C++ code doing the heavy lifting.
If you look at the game or simulator industry you find a lot of C++ plus maybe some added scripting component. There can also be tools written in other languages for scenery design or related tasks. But there is only very little Lisp used in that domain. You need to be a good hacker to get the necessary performance out of Lisp and to be able to access or write the low-level code. How do you get this knowhow? Try, fail, learn, try, fail less, learn, ... There is nothing but writing code and experimenting with it. Lisp is really useful for good software engineers or those that have the potential to be a good software engineer.
One of the main obstacles is the garbage collector. Either you have a very simple one (then you have a performance problem with random pauses) or you have a sophisticated one (then you have a problem getting it to work right). Only a few garbage collectors exist that would be suitable - most Lisp implementations have good GC implementations, but they are still not tuned for real-time or near-real-time use. Exceptions do exist. With C++ you can forget the GC, because there usually is none.
The other alternative to automatic memory management with a garbage collector is to use no GC and manage memory 'manually'. This is used by some (even commercial) Lisp applications that need to support some real-time response (for example process control expert systems).
The nearest thing developed in that area was the Crash Bandicoot game (and later games) for the PlayStation 1 from Naughty Dog (later games were for the PlayStation 2). Since being bought by Sony, they have switched to C++ for the PlayStation 3. Their development environment was written in Allegro Common Lisp and it included a compiler for a Scheme (a Lisp dialect) variant. On the development system the code got compiled and then downloaded to the PlayStation during development. They had their own 3D engine (very impressive, always got excellent reviews from game magazines), incremental level loading, complex behaviour control for lots of different actors, etc. So the PlayStation was really executing the Scheme code, but memory management was not done via GC (afaik). They had to develop all the technology on their own - nobody was offering Lisp-based tools - but they could, because they were excellent software developers. Since then I haven't heard of a similar project. Note that this was not just Lisp for scripting - it was Lisp all the way down.
On the Scheme side there is also a new interesting implementation called Ypsilon Scheme. It is developed for a pinball game - this could be the base for other games, too.
On the Common Lisp side, there have been Lisp applications talking to flight simulators and controlling aspects of them. There are some game libraries that are based on SDL. There are interfaces to OpenGL. There is also something like the 'Open Agent Engine'. There are also some 3d graphics applications written in Common Lisp - even some complex ones. But in the area of flight simulation there is very little prior art.
On the topic of CLOS vs. Functional Programming. Probably one would use neither. If you need to squeeze all possible performance out of a system, then CLOS already has some overheads that one might want to avoid.
Take a look at Functional Reactive Programming. There are a number of frameworks for this in Haskell (don't know about other languages), most of which are based around arrows. The basic idea is to represent relationships between time-varying values and events. So for example you would write (in Haskell arrow notation using no particular library):
velocity <- {some expression of airspeed, heading, gravity etc.}
position <- integrate <- velocity
The second line declares the relationship between position and velocity. The <- arrow operators are syntactic sugar for a bunch of library calls that tie everything together.
Then later on you might say something like:
groundLevel <- getGroundLevel <- position
altitude <- getAltitude <- position
crashed <- liftA2 (<) altitude groundLevel
to declare that if your altitude is less than the ground level at your position then you have crashed. Just as with the other variables here, "crashed" is not just a single value, it's a time-varying stream of values. That is why the "liftA2" function is used to "lift" the comparison operator from simple values to streams.
IO is not a problem in this paradigm. Inputs are time varying values such as joystick X and Y, while the image on the screen is simply another time varying value. At the very top level your entire simulator is an arrow from the inputs to the outputs. Then you call a "run" function that converts the arrow into an IO action that runs the game.
If you write this in Lisp you will probably find yourself creating a bunch of macros that basically re-invent arrows, so it might be worth just finding out about arrows to start with.
I don't know anything about flight sims, and you haven't listed anything in particular they consist of, so this is mostly a guess about writing a FS in Lisp.
Why not:
Lisp excels at exploratory programming. I think that since FSs have been around so long, and there are free and open-source examples, one would not benefit as much from this type of programming.
Flight sims are mostly (I'm guessing) written in static, natively compiled languages. If you're looking for pure runtime performance, in Lisp this tends to mean type declarations and other not-so-Lispy constructs. If you don't get the performance you want with naive approaches, your optimized Lisp might end up looking a lot like C, and Lisp isn't as good as C at writing C.
A lot of a FS, I'm guessing, is interfacing to a graphics library like OpenGL, which is written in C. Depending on how your FFI / OpenGL bindings are, this might, again, make your code look like C-in-Lisp. You might not have the big win that Lisp gives you in, say, a web app (which consists of generating a tree structure of plain text, which Lisp is great at).
Why:
I took a glance at the FlightGear source code, and I see a lot of structural boilerplate -- even a straight port might end up being half the size.
They use strings for keys all over the place (C++ doesn't have symbols). They use XML for semi-human-readable config files (C++ doesn't have a runtime reader). Simply switching to native Lisp constructs here could be a big win for minimal effort.
Nothing looks at all complex, even the "AI". It's simply a matter of keeping everything organized, and Lisp will be great at this because it'll be a lot shorter.
But the neat thing about Lisp is that it's multi-paradigm. You can use OO for organizing the "objects", and FP for computation within each object. I say just start writing and see where it takes you.
I would first think of the nature of the simulation.
Some simulations require interaction, like a flight simulator. I don't think functional programming is a good choice for an interactive (read: CPU-intensive/response-critical) application. Of course, if you have access to 8 PS3s wired together with Linux, you'll not care too much about performance.
For simulations like evolutionary/genetic programming, where you set it up and let 'er rip, a functional language may help model the problem domain better than an OO language. Not that I'm an expert in functional programming, but the ease of coding recursion and the idea of lazy evaluation common in functional languages seem to me a good fit for the 'let her rip' sort of sims.
I wouldn't say functional programming lends itself particularly well to flight simulation. In general, functional languages can be very useful for writing scientific simulations, though this is a slightly specialised case. Really, you'd probably be better off with a standard imperative (preferably OOP) language like C++/C#/Java, as they would tend to have the better physics libraries as well as graphics APIs, both of which you would need to use very heavily. Also, the OOP approach might make it easier to represent your environment. Another point to consider is that (as far as I know) the popular flight simulators on the market today are written pretty much entirely in C++.
Essentially, my philosophy is that if there's no particularly good reason you need to use functional paradigms, then don't use a functional language (though there's nothing to stop you using functional constructs in OOP/mixed languages). I suspect you're going to have a much less painful development process using the well-tested APIs for C++ and the languages more commonly associated with game development (which has many commonalities with flight sim). Now, if you want to add some complex AI to the simulator, Lisp might seem like a rather more obvious choice, though even then I wouldn't jump at it. And finally, if you're really keen on using a functional language, I would recommend you go with one of the more general-purpose ones like Python or even F# (both mixed imperative-functional languages really), as opposed to Lisp, which could end up getting rather ugly for such a project.
There is one problem with functional languages: they don't mesh well with state, but they do go well with process. So in a way it could be said they are action oriented. This means you'll be wasting your time simulating a plane; what you want to do is simulate the actions of flying a plane. Once you grasp that, you can probably get it to work.
Now as a side point, Haskell wouldn't be good IMHO, because it's too abstract for a "game"; this sort of app is all about input/output, but Haskell is about avoiding IO, so it'll become a monad nightmare and you'll be working against the language. Lisp is a better choice, or Lua or JavaScript; they are also functional, but not purely functional, so for your case try Lisp. Anyway, in any of these languages your graphics will be C or C++.
A serious issue, however, is that there is very little documentation, and even fewer tutorials, about functional languages and "games". Of course, scientific simulation is academically documented, but those papers are quite dense. If you succeed, maybe you could write up your experiences for others, as it's a rather empty field right now.
