Fast interpreted language for memory-constrained microcontroller

I'm looking for a fast interpreted language for a microcontroller.
The requirements are:
should be fast (not crucial, but would be nice)
should be light on data memory (small overhead, under 8 KB, excluding program variable space)
preferably small in program size, and the language should be compact
preferably human-readable (for example, BASIC)
Thanks!

Some AVR interpreters:
http://www.cqham.ru/tbcgroup/index_eng.htm
http://www.jcwolfram.de/projekte/avr/chipbasic2/main.php
http://www.jcwolfram.de/projekte/avr/chipbasic8/main.php
http://www.jcwolfram.de/projekte/avr/main.php
http://code.google.com/p/python-on-a-chip/
http://www.avrfreaks.net/index.php?module=Freaks%20Academy&func=viewItem&item_id=688&item_type=project
http://www.avrfreaks.net/index.php?module=Freaks%20Academy&func=viewItem&item_id=626&item_type=project
http://www.avrfreaks.net/index.php?module=Freaks%20Academy&func=viewItem&item_id=460&item_type=project

This is a bit generic: there are many kinds of microcontrollers, and thanks to technologies like Jazelle it is possible to run hardware-accelerated Java on a microcontroller (if yours supports it).
For a generic answer: Forth is commonly referenced. But really, you need to be far more specific with your question.

Microcontrollers come in a vast variety of architectures. There are small 8-bit families, 32-bit families with simple architectures, and 32-bit families with MMU support suitable for running a modern OS. If you don't state which family you are targeting, it is impossible to answer your question.
Anyway, for the 8-bit families the best you can get is a BASIC variant. See Bascom, for example. Note that this would be a compiled version of the "interpreted" language. If you actually want a runtime or an interpreter that will execute your code, then you most probably need to install an operating system on your microcontroller.

There were a variety of interpreted languages for small micros back in the late 1970s and 1980s. They seem to have mostly fallen out of fashion. I'd like to have a p-code-based C compiler for the PIC18 that could coexist nicely with my other C compiler; for much of my code I'd be willing to accept a 100-fold slowdown in exchange for a 50% space reduction (so long as I could keep the important stuff in native code). I would think that would be achievable, but I'm not about to implement such a thing from scratch myself.
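To make that trade-off concrete, here is a minimal sketch of a stack-based p-code interpreter of the sort described above, written in plain C. The opcode set and names are invented purely for illustration; a real PIC18 version would keep the bytecode in flash and call into native code for the performance-critical parts.

    /* Minimal stack-based p-code interpreter (illustrative, made-up opcode set). */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    static void run(const uint8_t *code)
    {
        int16_t stack[16];
        int sp = 0;                       /* next free stack slot */
        for (;;) {
            switch (*code++) {
            case OP_PUSH:  stack[sp++] = *code++;            break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp]; break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp]; break;
            case OP_PRINT: printf("%d\n", stack[--sp]);      break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void)
    {
        /* computes (2 + 3) * 4 and prints 20 */
        const uint8_t program[] = {
            OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_PUSH, 4, OP_MUL, OP_PRINT, OP_HALT
        };
        run(program);
        return 0;
    }

Each one-byte opcode stands in for what would otherwise be several native instructions, which is where the space saving comes from; the cost is the dispatch loop's overhead on every operation.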

Related

Radeon HD 4850 and OpenCL: will cl_khr_fp64 work on this video card?

This video card (Radeon HD 4850) conforms only to OpenCL 1.0, per AMD's compatibility table. I need some hardware to conduct intensive financial calculations with doubleN types (no floats at all!). According to this table, the card is able to work with double types. Now I have the possibility to buy it at quite an attractive price.
I'd greatly appreciate it if an answerer has real experience working with this card for OpenCL with the fp64 extension. Of course, if there are problems with this card, please write a couple of lines about them here.
Thank you and sorry for my English.
I haven't used this card with DP before, but if the spec says it is supported, then it's worth a try.
In my opinion, you should go with a newer model card though. There are a lot of cheap cards out there that will outperform the 4850, and they will support some new features as well.
This card supports double precision, but the 4xxx series doesn't have local memory on the chip. Since the standard mandates local memory support, it is emulated with global memory and is very slow. Many algorithms require local memory for obtaining a good speed-up, so a newer card, 5xxx and higher, is a lot better.
In addition, some combinations of older cards and older SDK versions only support double precision through the cl_amd_fp64 extension (not the official cl_khr_fp64 extension), because a few small parts of the standard are not supported. For the most part this doesn't matter much, except that you need to change the extension name in your code to make it work with doubles.
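In practice the swap usually amounts to a couple of preprocessor lines at the top of the kernel source; the kernel below is just a made-up example to show where the pragma goes:

    /* OpenCL C kernel source: enable whichever double-precision extension
     * the device actually reports. */
    #if defined(cl_khr_fp64)
    #pragma OPENCL EXTENSION cl_khr_fp64 : enable
    #elif defined(cl_amd_fp64)
    #pragma OPENCL EXTENSION cl_amd_fp64 : enable
    #else
    #error "No double-precision extension available on this device"
    #endif

    __kernel void scale(__global double *data, const double factor)
    {
        size_t i = get_global_id(0);
        data[i] *= factor;    /* any use of double requires the pragma above */
    }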
As a general tip, I would avoid the 4xxx series if you intend to do serious GPGPU development. Keep in mind also that the newer 7xxx series is much more optimized for GPU computation than both the 5xxx and 6xxx series, closing much of the gap with NVIDIA cards. So, if you can, try to aim for a 7xxx with double-precision support.

Are there any current non-Harvard architecture microcontrollers?

I have used and like the Atmel ATMEGA and ATTINY series microcontrollers, and think them quite good. One thing I am not terribly fond of though is the fact that they (and Microchip PIC uC family also) are all Harvard machines, meaning I can't really put external memory to use or execute out of RAM, only the flash.
While there are obvious advantages to this design, it makes it technically very difficult to do things like FORTH using an AVR or PIC. (I know there is at least one implementation, but it does not work like a normal FORTH and will wear out the flash rather rapidly)
FORTH was originally created for interactive machine-control type systems where lots of flexibility was needed, so things like the Z80 or 6809 were used as microcontrollers, with the control program executing out of RAM or some other storage device.
Does anyone know of current devices of similar complexity (preferably available in DIP packages) to the AVR/PIC that are von Neumann machines?
In addition to Freescale processors (that starblue has already pointed out), the Texas Instrument MSP430 family uses von Neumann architecture. However only the smallest ones are available in a DIP package.
UPDATE to include PIC32:
In my original post, I had forgotten that PIC32 microcontrollers have always been able to execute out of RAM, as demonstrated by this code example; and now Microchip has come out with the new PIC32MZ line of microcontrollers, with up to 2 MB of flash and 512 KB of RAM, which makes them feasible for fairly large RAM-based programs. Unfortunately none of these chips are available in DIP packages.
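For reference, running a function from RAM on a PIC32 is mostly a matter of an attribute. This is a rough sketch assuming the XC32 toolchain, whose __ramfunc__ macro (from <sys/attribs.h>) asks the linker to place a function in data RAM; the pin and the delay loop are placeholders.

    #include <xc.h>
    #include <sys/attribs.h>

    /* Placed in RAM by the linker; the C startup code copies it there. */
    __ramfunc__ void toggle_led(void)
    {
        LATAINV = 1u << 0;            /* toggle RA0 (placeholder pin) */
    }

    int main(void)
    {
        TRISACLR = 1u << 0;           /* RA0 as output */
        for (;;) {
            toggle_led();             /* executes out of RAM, not flash */
            for (volatile int i = 0; i < 100000; i++)
                ;                     /* crude delay */
        }
    }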
However Olimex, sort of the Bulgarian equivalent of SparkFun and Adafruit, has a PIC32-HMZ144 development board for EUR 21.95, which is about $24. This is a smoking hot deal, since the processor alone costs over $12 at Digi-Key. (There are other boards available from US suppliers from around $50 and up.)
The original PIC32MX line has twenty variants in 28-pin DIP packages, but they are limited to a maximum of 64K of RAM, still useful for some projects.
Farnell has a nice search function that lets you search for microcontrollers in DIP packages. Though you'll have to figure out which families are non-Harvard by looking at the data sheets.
Take a look at the 68K ones and the HCS08.
Update: In the meantime some ARM Cortex-M controllers in DIP packages have become available, the LPC810M021FN8 and the LPC1114FN28 from NXP.
You might want to peruse the designs available at the OpenCores project. That is an open source project devoted to CPU core designs implemented in VHDL, Verilog, and similar FPGA design languages. There are complete and respectable implementations of classic 8-bit CPUs such as the 8080, 6502, and 8051. The 6502 I linked to claims to be cycle-accurate compared to the original chip. Others are functionally complete, but often have more modern buses and signals.
They won't (I think) be available in DIP packages, but you can always find breakout boards.
The designs are all open source, under a wide variety of licenses.
You may also have a look at the Zilog eZ80. Since they're binary-compatible with the old Z80, you should be able to find a FORTH implementation that runs on them, but you'd probably need to run it on top of good old CP/M :)
Also, these are the only ones that I found that have the memory bus accessible from the outside, i.e. allow code execution from external memory.
The ARM-based ones: even the Cortex-M3 claims to be Harvard, but you can load programs into data RAM and execute from that RAM, so it is not really Harvard. Other ARMs are normally not Harvard, and some have external memory interfaces you can use to expand the internal resources.
This is actually not an answer, but more of a related query. Why would you go to von Neumann in a microcontroller if the previous generation was Harvard? Isn't it all win-win in terms of performance? Other than complexity (which, if the original PICs could handle it, should not be that great), what are the downsides of the Harvard architecture?
The new Kinetis line of microcontrollers from Freescale puts an ARM Cortex-M4 inside a microcontroller package, and program code can be located anywhere in addressable space (RAM or FLASH, or even Flex Memory.)
The Kinetis Solution Advisor is a powerful selector guide that can help you find the micro you want. Memory from 32 KB to 1 MB, all the peripherals you could want, and pricing from under a dollar to around $10.

OpenCL vs. DirectCompute?

I'm looking for comparisons between OpenCL and DirectCompute, but I haven't found anything. OpenCL's advantages of being cross-platform and having a wider range of supported GPUs don't matter to me. I'm fine with coding on Windows against DX11 GPUs only. Assuming that, what are the pros and cons of each API?
I know this question was raised before, but I'm looking for more details.
I'm not interested in CUDA, since I don't want to restrict myself to only Nvidia hardware.
Probably the biggest difference for a coder is that DirectCompute is programmed in a language similar to HLSL, while OpenCL is programmed in a C-like language.
Another difference to consider is that, generally, for commodity level GPUs, the DirectX support is better (faster and less buggy) than OpenGL support on Windows. This may translate to more stable support for DirectCompute, but really, this is just speculation.
Well, the major advantage of OpenCL is that it is not just limited to graphics cards. You can make use of your multicore CPU, graphics card, and potentially any number of other hardware-acceleration devices (DSPs, etc.) all from the same program.
I'm not sure if DirectCompute allows that freedom.
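As a small illustration of that breadth, the same OpenCL host API enumerates CPUs, GPUs, and other accelerators alike. Below is a minimal host-side sketch in C (error handling omitted for brevity):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint num_platforms = 0;
        clGetPlatformIDs(8, platforms, &num_platforms);

        for (cl_uint p = 0; p < num_platforms; p++) {
            cl_device_id devices[16];
            cl_uint num_devices = 0;
            /* CL_DEVICE_TYPE_ALL returns CPUs, GPUs and accelerators alike */
            clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &num_devices);

            for (cl_uint d = 0; d < num_devices; d++) {
                char name[256];
                cl_device_type type;
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof name, name, NULL);
                clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof type, &type, NULL);
                printf("%s (%s)\n", name,
                       (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                       (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other");
            }
        }
        return 0;
    }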
The OpenCL cross-platform-ness is not just a detail, as the host code (the part that calls the OpenCL API and submits kernels) can itself be cross-platform.
Write once, run on any GPGPU, anywhere.
Otherwise the OpenCL tooling is really getting better, with an ATI Stream plugin for Visual Studio, the NVidia and ATI SDKs that contain tons of samples, etc.
Another option now is C++ AMP, which gives you modern C++ syntax without the need for a separate compiler while still preserving hardware portability. Please follow the links from here for more info, and feel free to post questions as you have them: http://blogs.msdn.com/b/nativeconcurrency/archive/2011/09/13/c-amp-in-a-nutshell.aspx
I use OpenCL because I can easily port my app to Linux, which is not possible with DirectCompute.
I also think that the performance of OpenCL implementations will improve with time (to the point of reaching the same level as CUDA on NVidia cards), and that the driver bugs will (hopefully ;) ) be eliminated with time.

Lowest level language until ASP.NET?

It's assembler, right? Can someone please point out the progression we've had in programming languages from assembler to the days of ASP.NET, namely the chronological order of languages?
Here's a wiki timeline of all programming languages.
I would include a FTA table, but the list is very robust and extensive.
And also, the lowest language you ever get to is assembly (aside from straight-up issuing machine instructions), regardless of what other language is built on top (including ASP.NET). Other languages are really just abstractions on top of assembly. In fact, ASP.NET gets compiled into IL (Intermediate Language) code, which then gets JIT-compiled into native machine code. Assembly is as close to the metal as you're going to get.
To be pedantic, "assembler" is not actually a language (any more than "compiler" is;-) -- rather, it's a program that takes a source file in "assembly language" and emits binary machine code. The binary machine code can be said to be lower-level than the assembly language, since the latter allows use of some symbols and often includes a macro processing ability as well.
"Below" binary machine code, there may be other levels, known as "microcode" (but there might not be -- the CPU might be implemented entirely in real hardware, without any microprogramming aspect). That might be relevant only if the system's architecture allowed programmers to alter the microcode, especially by adding to it, etc -- there have been machines that did that, but I don't believe any currently commercialized CPU does. So you probably don't have to care about that (and the by-now-esoteric distinctions between vertical and horizontal microcode, etc, etc;-).
Programming languages are just ways to assemble solutions to computing problems.
The argument is "assembled out of what?"
From that point of view, I'd suggest the following evolutionary curve:
Napier's Bones
Babbage's difference engine
Jacquard (card) looms
(Conceptual) Abstract Turing machines/Post Systems/Church's calculus
Relay Computers (Aiken?)
Vacuum tubes as switching elements (Eniac)
Transistor-based computers
Microprogrammed machines
Integrated Circuits
Large Scale Circuits
with "assembler" being the programming language used to put together solutions consisting of instructions for real machines, starting with the vacuum tube systems. (I'm not sure the relay machines actually had assemblers.)
Programming languages are just ways to put together high-level commands that reduce, in effect, to assembler instructions.
There are two different dimensions to consider here, what I'd call vertical growth (languages build up over time from one generation to the next) and horizontal growth (syntactic improvements and reduction in complexity.)
A good explanation of vertical change is seen here: http://web.sxu.edu/rogers/sys/generations.html
And a nice, yet incomplete, illustration of horizontal change is here: http://oreilly.com/news/graphics/prog_lang_poster.pdf

Functional programming in nuclear plants?

After reading this question I just wondered whether it would be a good idea to use Haskell (or other functional programming languages) in mission critical industries.
Apart from Erlang, most languages followed imperative/design-by-contract paradigms (Ada, Eiffel, C++).
But what about the functional ones?
The resulting code would be easily maintainable and stable, and lots of potential bugs could be eliminated by their strict type systems at compile time.
Or is lazy evaluation more dangerous than helpful? Are there other security drawbacks?
I think you could. The language seems well suited for such situations, assuming you trust the compiler enough to use it in mission-critical situations.
Remember that in mission-critical situations it is not only your code that is under scrutiny, but all the other components too. That includes the compiler (the Haskell compiler is not among the easiest ones to code-review), appropriately certified hardware that runs the software, appropriate hardware that compiles your code, hardware that bootstraps the compilation of the compiler that will compile your code, hell - even the wires that connect it all to the power grid and the frequency of the voltage in the socket.
If you are interested in looking at mission-critical software quality, I suggest looking at NASA's software quality procedures. They are very strict and formal, but then these guys throw millions of dollars into space in the hope that it will survive pretty rough conditions, make it to Mars or wherever, then operate autonomously and send some nice photos of Martians back to Earth.
So, there you go: Haskell is good for mission critical situations, but it'd be an expensive process to bootstrap its usage there.
