Why doesn't Julia support something like switch-case?

Julia doesn't seem to support a switch-case control structure, at least according to the current documentation on Control Flow.
switch-case is a common control-flow construct in imperative and object-oriented languages, so why not in Julia?
Languages supporting switch-case (incomplete list):
C/C++
Java
Pascal
PHP
JavaScript
TypeScript
Octave

The basic philosophy of Julia is to provide most functionality as packages and keep the core (Base) ultra-lean. So the answer to "why doesn't Julia support X" is usually "Julia supports X via package Y". In this case, the Match.jl package provides a switch-case-like structure that is very powerful.
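To give a feel for it, here is a minimal, illustrative sketch of Match.jl's @match macro (the function and values are made up; see the package's documentation for the full pattern syntax):

```julia
using Match   # assumes the Match.jl package has been added to the environment

function describe(x)
    @match x begin
        0       => "zero"
        n::Int  => "some other integer: $n"   # type pattern with variable binding
        _       => "something else entirely"  # fallback, like `default:` in C
    end
end

describe(0)     # "zero"
describe(42)    # "some other integer: 42"
describe("hi")  # "something else entirely"
```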
There is also a Switch.jl package that is very close to C's switch, but it is not actively maintained.

There is extensive discussion about including this in the Julia language. It will probably happen at some point, but maybe not until after v1.0.
See here for the main discussion (includes links to other discussions): https://github.com/JuliaLang/julia/issues/18285
& this one is informative too (but now closed in favour of the above):
https://github.com/JuliaLang/julia/issues/5410
Edit: it is also worth mentioning that Julia need not offer dedicated syntax for switch-case, because it may be best (in terms of features) to implement it as a macro (i.e. via metaprogramming), which need not be included in Base Julia.

Related

What is more in the spirit of the Julia language and philosophy?

I recently started programming in Julia for research purposes. Going through it I started loving the syntax, I have had positive experiences with the community here on SO, and now I am thinking about porting some code from other programming languages.
Working with highly computationally expensive forecasting models, it would be nice to have them all in a powerful modern language like Julia.
I would like to create a project and I am wondering how I should design it. I am concerned both from a performance and a language perspective (e.g.: would it be better to create modules, submodules, and functions, or would something else be preferred? Is it better to use dictionaries or custom types?).
I have looked at different GitHub projects in my field, but I haven't really found a common standard. Therefore I am wondering: what is more in the spirit of the Julia language and philosophy?
EDIT:
It has been pointed out that this question might be too generic. Therefore, I would like to focus it on how it would be better to structure modules (i.e. separate modules for main functions and subroutines versus modules and submodules, etc.). I believe this would be enough for me to get a feel for what might be considered in the spirit of the Julia language and philosophy. Of course, additional examples and references are more than welcome.
The most you'll find is that there is an "official" style-guide. The rest of the "Julian" style is ill-defined, but there are some ways to heuristically define it.
First of all, it means designing the software around multiple dispatch and the type system. Software that follows a Julian design philosophy usually won't define a bunch of functions like test_pumpkin and test_pineapple; instead it will define methods of test dispatching on the types Pumpkin and Pineapple. This allows for clean, understandable code. It will break tasks up into small type-stable functions, which allows for good performance. It will likely also be written very generically, allowing the user to pass anything that is a subtype of AbstractArray or Number, and using the power of dispatch to let the software work on numbers its author has never even heard of. (In this respect, custom types are recommended over dictionaries when you need performance. However, for a type you have to know all of the fields up front, which means some things still require dictionaries.)
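As a rough sketch of that dispatch-based style (the Pumpkin/Pineapple names come from the answer above; the rest is invented for illustration):

```julia
abstract type Fruit end
struct Pumpkin   <: Fruit end
struct Pineapple <: Fruit end

# One generic function with a method per type, instead of test_pumpkin / test_pineapple.
test(::Pumpkin)   = "testing a pumpkin"
test(::Pineapple) = "testing a pineapple"

# Generic code written against the abstraction works for any Fruit subtype,
# including ones defined later in other packages.
test_all(fruits::AbstractVector{<:Fruit}) = map(test, fruits)

test_all([Pumpkin(), Pineapple()])   # ["testing a pumpkin", "testing a pineapple"]
```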
Software that follows a Julian design philosophy may also implement a DSL (Domain-Specific Language) to offer the user a simpler interface. Instead of requiring the user to conform to archaic standards derived from C/Fortran, or to write large repetitive items and inputs, the package may provide macros to allow the user to state the problem more naturally for the software to solve.
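A toy, hypothetical sketch of that macro/DSL idea (the @define_model macro below is invented for illustration and is not from any real package):

```julia
# A tiny macro-based "DSL": the user writes the model equation directly and gets
# an ordinary, compiled function back, instead of wiring up boilerplate by hand.
macro define_model(name, equation)
    # Escape everything so that `t` in the user's equation binds to the parameter `t`.
    return esc(:( $name(t) = $equation ))
end

@define_model(logistic, 1 / (1 + exp(-t)))
logistic(0.0)    # 0.5
```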
Other items which are part of the Julian design philosophy are up for much debate. Is proper Julia code devectorized? I would say no: the loop-fusing broadcast dot (.) is a powerful way to write MATLAB-style "vectorized" code and have it perform like a devectorized loop. However, I have seen others prefer devectorized styles.
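For example, a small sketch of the fused-broadcast style described above (the arrays and the formula are arbitrary):

```julia
x = rand(10_000)
y = rand(10_000)

# MATLAB-style "vectorized" syntax; the dots fuse into a single loop with no temporaries.
z = 2 .* x .+ sin.(y)

# Roughly equivalent to the hand-written ("devectorized") loop:
z2 = similar(x)
for i in eachindex(x, y)
    z2[i] = 2 * x[i] + sin(y[i])
end

z ≈ z2   # true
```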
Also note that Julia is very different from something like Python in that, in Julia, you can essentially "build your own standard way of doing something". Since there's no performance penalty for functions/types declared in packages rather than Base, you can build your own Julia world if you want, using macros to define your own "function-like" objects, etc. I mean, you could re-create Java styles in Julia if you wanted to.
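One concrete, illustrative instance of such "function-like" objects is a callable struct (the Polynomial type here is made up):

```julia
struct Polynomial
    coeffs::Vector{Float64}
end

# Instances of Polynomial become callable, i.e. they behave like functions.
(p::Polynomial)(x) = sum(c * x^(i - 1) for (i, c) in enumerate(p.coeffs))

p = Polynomial([1.0, 2.0, 3.0])   # represents 1 + 2x + 3x^2
p(2.0)                            # 17.0
```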

What are the main differences between CLISP, ECL, and SBCL?

I want to do some simulations with ACT-R and I will need a Common Lisp implementation. I have three Common Lisp implementations available: (1) CLISP, (2) ECL, and (3) SBCL. As you might have gathered from the links, I have read a bit about all three of them on Wikipedia. But I would like the opinion of some experienced users. More specifically, I would like to know:
(i) What are the main differences between the three implementations (e.g.: What are they best at? Is any of them used only for specific purposes and might therefore not be suited for specific tasks?)?
(ii) Is there an obvious choice either based on the fact that I will be using ACT-R or based on general reasons?
As this could be interpreted as a subjective question, I checked "What topics can I ask about here?" and "What types of questions should I avoid asking?", and if I read correctly it should not qualify as forbidden fruit.
I wrote a moderately-sized application and ran it in SBCL, CCL, ECL, CLISP, ABCL, and LispWorks. For my application, SBCL is far and away the fastest, and it's got a pretty good debugger. It's a bit strict about some warnings--you may end up coding in a slightly more regimented way, or turn off one or more warnings.
I agree with Sylwester: If possible, write to the standard, and then you can run your code in any implementation. You'll figure out through testing which is best for your project.
Since SBCL compiles so aggressively, once in a while the stacktrace in the debugger is less informative than I'd like. This can probably be controlled with parameters, but I just rerun the same code in one of the other implementations. ABCL has an informative stacktrace, for example, as I recall. (It's also very slow, but if you want real Common Lisp and Java interoperability, it's the only option.)
One of the nice things about Common Lisp is how many high-quality implementations there are, most of them free.
For informal use (e.g. to learn Common Lisp), CCL or CLISP may be a better choice than SBCL.
I have never tried compiling to C using ECL. It's possible that it would beat SBCL on speed for some applications. I have no idea.
CLISP and LispWorks will not handle arbitrarily long argument lists (unless that's been fixed in the last couple of years, but I doubt it). This turned out to be a problem with my application, but would not be a problem for most code.
Doesn't ACT-R come out of Carnegie Mellon? What do its authors use? My guess would be CMUCL or SBCL, which is derived from CMUCL. (I only tried CMUCL briefly. Its interpreter is very slow, but I assume that compiled code is very fast. I think that most people choose SBCL over CMUCL, however.)
(It's possible that this question belongs on Programmers.SE.)
In general, SBCL is the default choice among open-source Lisps. It is solid, well-supported, produces fast code, and provides many goodies beyond what the standard mandates (concurrency primitives, profiling, etc.) Another implementation with similar properties is CCL.
CLISP is more suitable if you're not an engineer, or if you want to quickly show Lisp to someone who isn't an engineer. It's a pretty basic implementation, but quick to get running and user-friendly. A Lisp calculator :)
ECL's major selling point is that it's embeddable, i.e. it is rather easy to make it work inside some C application, like a web server etc. It's a good choice for geeks who want to explore solutions on the boundary of Lisp and the outside world. If you're not interested in such a use case I wouldn't recommend trying it, especially since it is not actively supported at the moment.
Their names, their bugs, and their non-standard additions (using them will lock you in).
I use CLISP as a REPL and for testing during development, and usually SBCL for production. ECL I've never used.
I recommend you test your code with more than one implementation.

Haskell-like libraries for Standard ML

Can anyone recommend a library extension for Standard ML with similar strength to, and preferably looking like, the Prelude for Haskell? Preferably one that works across many ML implementations, i.e. built with only the existing standard library and itself.
One library I have found is MyLib, which does not resemble Prelude particularly.
The SML/NJ lib contains quite substantial functionality, most of which should be portable to any SML implementation. You can find the manual here.

Compiled dynamic language

I am searching for a programming language for which a compiler exists and that supports self-modifying code. I've heard that Lisp supports these features, but I was wondering if there is a more C/C++/D-like language with these features.
To clarify what I mean:
I want to be able to have, in some way, access to the program's code at runtime and apply any kind of changes to it, that is, removing commands, adding commands, or changing them.
As if I had the AST of my program. Of course I can't have that tree in a compiled language, so it must be done differently. The compiler would need to translate the self-modifying commands into their binary-equivalent modifications so they would work at runtime with the compiled code.
I don't want to be dependent on a VM; that's what I meant by compiled :)
Probably there is a reason Lisp is the way it is. Lisp was designed to program other languages and to compute with symbolic representations of code and data. The boundary between code and data is no longer there. This influences the design AND the implementation of a programming language.
Lisp has got its syntactical features to generate new code, translate that code and execute it. Thus pre-parsed code is also using the same data structures (symbols, lists, numbers, characters, ...) that are used for other programs, too.
Lisp knows its data at runtime: you can query everything for its type or class. Classes are objects themselves, as are functions. So these elements of the programming language and the programs are themselves first-class objects; they can be manipulated as such. 'Dynamic language' has nothing to do with 'dynamic typing'.
'Dynamic language' means that the elements of the programming language (for example via meta classes and the meta-object protocol) and the program (its classes, functions, methods, slots, inheritance, ...) can be inspected at runtime and can be modified at runtime.
Probably the more of these features you add to a language, the more it will look like Lisp, since Lisp is pretty much the local maximum of a simple, dynamic, programmable programming language. If you want some of these features, then you might want to think about which features of your other programming language you have to give up or are willing to give up. For example, for a simple code-as-data language, the whole C syntax model might not be practical.
So C-like and 'dynamic language' might not really be a good fit - the syntax is one part of the whole picture. But even the C syntax model limits us how easy we can work with a dynamic language.
C# has always allowed for self-modifying code.
C# 1 allowed you to essentially create and compile code on the fly.
C# 3 added "expression trees", which offered a limited way to dynamically generate code using an object model and abstract syntax trees.
C# 4 builds on that by incorporating support for the "Dynamic Language Runtime". This is probably as close as you are going to get to LISP-like capabilities on the .NET platform in a compiled language.
You might want to consider using C++ with LLVM for (mostly) portable code generation. You can even pull in clang as well to work in C parse trees (note that clang has incomplete support for C++ currently, but is written in C++ itself)
For example, you could write a self-modification core in C++ to interface with clang and LLVM, and the rest of the program in C. Store the parse tree for the main program alongside the self-modification code, then manipulate it with clang at runtime. Clang will let you directly manipulate the AST tree in any way, then compile it all the way down to machine code.
Keep in mind that manipulating your AST in a compiled language will always mean including a compiler (or interpreter) with your program. LLVM is just an easy option for this.
JavaScript + V8 (the Chrome JavaScript engine)
JavaScript is
dynamic
self-modifying (self-evaluating) (well, sort of, depending on your definition)
has a C-like syntax (again, sort of, that's the best you will get for dynamic)
And now you can compile it with V8: http://code.google.com/p/v8/
"Dynamic language" is a broad term that covers a wide variety of concepts. Dynamic typing is supported by C# 4.0 which is a compiled language. Objective-C also supports some features of dynamic languages. However, none of them are even close to Lisp in terms of supporting self modifying code.
To support such a degree of dynamism and self-modifying code, you should have a full-featured compiler to call at run time; this is pretty much what an interpreter really is.
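As a hedged side note, Julia (the language the first question on this page is about) happens to illustrate that trade-off: it compiles to native code, yet the compiler stays available at runtime and code is ordinary data. A small sketch, meant to be run at the top level or in the REPL:

```julia
# Julia compiles to native code (via LLVM) but keeps the compiler around at runtime,
# and represents code as an ordinary data structure (Expr) that can be rebuilt and
# re-evaluated; essentially the "full-featured compiler to call at run time" above.
op = :+                                   # a symbol naming the operation
eval(:( combine(a, b) = $op(a, b) ))      # build the AST of a definition and compile it
combine(2, 3)                             # 5

op = :*                                   # "self-modify": rebuild with another operation
eval(:( combine(a, b) = $op(a, b) ))
combine(2, 3)                             # 6
```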
Try Groovy. It's a dynamic JVM-based language that is compiled at runtime. It should be able to execute its own code.
http://groovy.codehaus.org/
Otherwise, you've always got Perl, PHP, etc... but those are not, as you suggest, C/C++/D- like languages.
I don't want to be dependent on a VM; that's what I meant by compiled :)
If that's all you're looking for, I'd recommend Python or Ruby. They can both run on their own virtual machines and the JVM and the .Net CLR. Thus, you can choose any runtime you want. Of the two, Ruby seems to have more meta-programming facilities, but Python seems to have more mature implementations on other platforms.

What are the main issues in designing an interpreter for a functional language?

Suppose I want to implement an interpreter for a functional language. I would like to understand the issues involved in doing so and what suitable literature is available. This is a new language that is in its early design stages, which is why the question is broad in scope.
For the purpose of this discussion we can assume that the purpose of the language is not important and that its functional features can be changed (even drastically) if it makes a significant difference in the ease of writing an interpreter.
The MIT website has an online copy of Structure and Interpretation of Computer Programs as well as videos of the MIT 6.001 lectures using Scheme, recorded at HP in 1986. These form a great introduction to language design.
I would highly recommend Structure and Interpretation of Computer Programs (SICP) as a starting point. This book will introduce the idea of what it means to write an interpreter (and a compiler), and is generally a must-read for anybody designing languages.
Implementing an interpreter for a functional language isn't likely to be too much different from implementing an interpreter for any other general purpose language. There's lexical analysis, parsing, AST construction, semantic analysis, plus execution (for a pure interpreter) or code generation and optimisation (for a compiler, even compiling to bytecode like Java/Perl/Python). SICP will introduce the difference between "applicative order" and "normal order" evaluation, which may be important for you in a pure functional context.
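To make the applicative-order point concrete, here is a minimal, illustrative sketch (written in Julia, the language used earlier on this page; all names are invented) of an AST and an applicative-order evaluator for a tiny lambda-calculus-like language:

```julia
abstract type Node end
struct Num <: Node; value::Float64; end
struct Var <: Node; name::Symbol; end
struct Lam <: Node; param::Symbol; body::Node; end    # λparam. body
struct App <: Node; fn::Node; arg::Node; end          # function application

struct Closure
    param::Symbol
    body::Node
    env::Dict{Symbol,Any}
end

evaluate(n::Num, env) = n.value
evaluate(n::Var, env) = env[n.name]
evaluate(n::Lam, env) = Closure(n.param, n.body, env)
function evaluate(n::App, env)
    f = evaluate(n.fn, env)
    a = evaluate(n.arg, env)                  # applicative order: evaluate the argument first
    evaluate(f.body, merge(f.env, Dict(f.param => a)))
end
# (A normal-order evaluator would instead pass the unevaluated argument as a thunk.)

# ((λx. x) 42)
evaluate(App(Lam(:x, Var(:x)), Num(42.0)), Dict{Symbol,Any}())   # 42.0
```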
For just about any language interpreter or compiler, the main issues are the same, I think.
You need to decide certain basic characteristics of the language (semantics, not syntax), and the bulk of the design of the thing follows from that.
For example, does your language have a type system? If so, what sorts of types does it have? Is it going to be statically typed, dynamically typed, duck-typed?
What sort of expressions are you planning to support? Do you need to define an order of operations? Will you even have operators?
What will you use as the run-time representation of the program? Will you convert the text to a byte-code representation, or an AST, or a tokenized form of the source text?
There are toolkits available to help take some of the tedium out of the actual parsing of text (ANTLR and Bison, to name two), but I don't know of anything that helps with the actual interpretation part of the task. I'm sure somebody will suggest something.
The main issue is having a semantics for the language you're implementing -- with that, the implementation becomes straightforward. Otherwise, this question is incredibly broad and hard to answer.
I'd recommend Essentials of Programming Languages as a good complement to SICP, particularly if you're interested in interpreters: Official EOPL site. You may want to check out the third edition-- the site hasn't been updated for it yet.
Edit: spam prevention is making me choose between links, so the official page is now unlinked. It's easily Google-able, though.
