I know there is a list-comprehension library for Common Lisp (incf-cl), and I know list comprehensions are supported natively in various other functional (and some non-functional) languages (F#, Erlang, Haskell and C#). Is there a list comprehension library for Scheme?
incf-cl is implemented in CL as a library using macros - shouldn't it be possible to use the same techniques to create one for Scheme?
Swindle is primarily a CLOS emulator library, but it has list comprehensions too. I've used them and they're convenient, but the version I used was buggy and incomplete. (I just needed generic functions.)
However, you probably want SRFI-42. I haven't used it, but it HAS to have fewer bugs than the Swindle list comprehensions.
I don't know which Scheme you use. PLT Scheme bundles Swindle and SRFI-42. Both are supposed to be cross-Scheme compatible, though.
If you use PLT Scheme, here is SRFI-42's man page. You say (require srfi/42) to get it.
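For a flavour of what that looks like, here is a minimal sketch of SRFI-42's eager comprehensions (syntax as described in the SRFI; I haven't tried it on every Scheme):

    (require srfi/42)             ; PLT Scheme; other Schemes load SRFI-42 their own way

    ;; squares of the even numbers below 10
    (list-ec (: i 10)             ; generator: i = 0, 1, ..., 9
             (if (even? i))       ; filter qualifier
             (* i i))
    ;; => (0 4 16 36 64)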
You can use LINQ for R6RS Scheme (although it could be made to run under 'older' implementations).
Julia doesn't seem to support anything like a switch-case control structure, at least according to the current Control Flow documentation?
switch-case is a common flow-control construct in imperative and object-oriented languages, so why not in Julia?
Languages supporting switch-case (incomplete list):
C/C++
Java
Pascal
PHP
JavaScript
TypeScript
Octave
The basic philosophy of Julia is to provide most functionality through packages and keep the core (Base) ultra-lean. So the answer to "why doesn't Julia support X" is usually "Julia supports X via package Y". In this case, the Match.jl package provides a switch-case-like structure that is very powerful.
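As a rough illustration (check the Match.jl README for the authoritative syntax), a switch-like dispatch on a value looks roughly like this:

    using Match   # install with: ] add Match

    describe(x) = @match x begin
        1        => "one"
        2        => "two"
        n::Int   => "some other integer: $n"
        _        => "not an integer"
    end

    describe(2)    # "two"
    describe(5)    # "some other integer: 5"
    describe(1.5)  # "not an integer"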
There is also a Switch.jl package that is very close to C's switch, but it is not actively maintained.
There is extensive discussion about including this in the Julia language. It will probably happen at some point, but maybe not until after v1.0.
See here for the main discussion (includes links to other discussions): https://github.com/JuliaLang/julia/issues/18285
And this one is informative too (but now closed in favour of the above):
https://github.com/JuliaLang/julia/issues/5410
Edit
Also, it is worth mentioning that Julia need not offer dedicated syntax for switch-case, because it might be best (in terms of features) to implement it as a macro (i.e. via metaprogramming), which need not be included in Base Julia.
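To make that concrete, here is a purely hypothetical macro sketch (not from any package, and not how Match.jl does it) showing that a switch-like form can live entirely in user code:

    # Hypothetical sketch: expands @switch into a chain of equality tests.
    macro switch(value, cases...)
        ex = :(nothing)
        for c in reverse(collect(cases))
            # each case is written as  candidate => result
            lhs, rhs = c.args[2], c.args[3]
            ex = :(v == $(esc(lhs)) ? $(esc(rhs)) : $ex)
        end
        quote
            v = $(esc(value))   # evaluate the scrutinee once
            $ex
        end
    end

    @switch(2 + 1,
            1 => "one",
            2 => "two",
            3 => "three")   # => "three"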
I recently started programming in Julia for research purposes. Going through it, I started loving the syntax, I have had positive experiences with the community here on SO, and now I am thinking about porting some code from other programming languages.
Since I work with computationally expensive forecasting models, it would be nice to have them all in a powerful, modern language like Julia.
I would like to create a project and I am wondering how I should design it. I am concerned both from a performance and a language perspective (i.e.: Would it be better to create modules – submodules – functions, or would something else be preferred? Is it better to use dictionaries or custom types?).
I have looked at different GitHub projects in my field, but I haven't really found a common standard. Therefore I am wondering: what is more in the spirit of the Julia language and philosophy?
EDIT:
It has been pointed out that this question might be too generic. Therefore, I would like to focus it on how it would be better structuring modules (i.e. separate modules for main functions and subroutines versus modules and submodules, etc.). I believe this would be enough for me to have a feel about what might be considered in the spirit of the Julia language and philosophy. Of course, additional examples and references are more than welcome.
The most you'll find is that there is an "official" style-guide. The rest of the "Julian" style is ill-defined, but there are some ways to heuristically define it.
First of all, it means designing the software around multiple dispatch and the type system. Software that follows a Julian design philosophy usually won't define a bunch of functions like test_pumpkin and test_pineapple; instead it will dispatch on test for the types Pumpkin and Pineapple. This allows for clean, understandable code. It will break tasks up into small type-stable functions, which allows for good performance. It will likely also be written very generically, allowing the user to pass items that are subtypes of AbstractArray or Number, and using the power of dispatch to let the software work on numbers its author has never even heard of. (In this respect, custom types are recommended over dictionaries when you need performance. However, for a type you have to know all of the fields up front, which means some things still require dictionaries.)
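For instance (the fruit types and the test function here are purely illustrative, following the names above):

    abstract type Fruit end
    struct Pumpkin   <: Fruit end
    struct Pineapple <: Fruit end

    # one generic function `test`, with a method per concrete type
    test(::Pumpkin)   = "testing a pumpkin"
    test(::Pineapple) = "testing a pineapple"

    # a fallback on the abstract type keeps the code open to
    # fruits the author has never even heard of
    test(::Fruit)     = "testing some generic fruit"

    test(Pumpkin())     # "testing a pumpkin"
    test(Pineapple())   # "testing a pineapple"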
Software that follows a Julian design philosophy may also implement a DSL (Domain-Specific Language) to offer a simpler interface to the user. Instead of requiring the user to conform to archaic standards derived from C/Fortran, or to write large repetitive items and inputs, the package may provide macros that let the user define the problem for the software to solve in a more natural way.
Other items which are part of the Julian design philosophy are up for much debate. Is proper Julia code devectorized? I would say no: the loop-fusing broadcast . is a powerful way to write MATLAB-style "vectorized" code and have it perform like a devectorized loop. However, I have seen others prefer devectorized styles.
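A small sketch of the two styles (the array names are arbitrary; both should end up as a single loop over the data, with no temporary arrays in the broadcast version):

    x = rand(10_000)
    y = similar(x)

    # "vectorized" style: the dots fuse into one loop
    y .= 2 .* x .^ 2 .+ 1

    # devectorized style: an explicit loop computing the same thing
    function fill_devectorized!(y, x)
        for i in eachindex(x, y)
            y[i] = 2 * x[i]^2 + 1
        end
        return y
    end
    fill_devectorized!(y, x)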
Also note that Julia is very different from something like Python where in Julia, you can essentially "build your own standard way of doing something". Since there's no performance penalty for functions/types declared in packages rather than Base, you can build your own Julia world if you want, using macros to define your own "function-like" objects, etc. I mean, you can re-create Java styles in Julia if you wanted.
Recently I had to write some code involving math formulas in Clojure, and I realized that there is Java's java.lang.Math library of functions and there is the clojure.math.numeric-tower library of functions.
Is this the accepted way to use math functions in Clojure, pulling from two different places to get the full complement? Or am I supposed to just use Math? Or something else?
Using both or either as appropriate seems to be the norm.
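For example (assuming the org.clojure/math.numeric-tower dependency is on your classpath), the usual split is Java interop for fast double-precision functions and numeric-tower for exact or arbitrary-precision arithmetic:

    (ns example.core
      (:require [clojure.math.numeric-tower :as math]))

    ;; java.lang.Math via interop: fast, works on doubles
    (Math/sqrt 2.0)     ;=> 1.4142135623730951
    (Math/pow 2 10)     ;=> 1024.0

    ;; numeric-tower: plays well with Clojure's exact numeric types
    (math/expt 2 100)   ;=> 1267650600228229401496703205376N
    (math/sqrt 16)      ;=> 4  (exact result when one exists)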
Can I write a DLL file that exports functions for use from or that use Common Lisp?
Each Common Lisp implementation has a different way to extend it from various foreign languages. Which implementation do you intend to use?
The GNU CLISP implementation allows one to define external modules written in C that expose objects, symbols, and functions to Lisp. The documentation for writing an external module is complete, but you'll likely find it difficult to integrate this into the rest of your build process, unless you're already using make or shell scripts to automate portions of it.
Alternately, you can turn the question around and ask how you access C libraries from Common Lisp. Again, most implementations have a foreign function interface (FFI) that allows them to reach out to various other languages. CLISP has an FFI, but you can also use a package like CFFI for portability among Common Lisp implementations. The CLISP documentation describes the trade-offs between these two approaches.
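As a hedged sketch of the CFFI direction (the library name and whether you need to load it at all vary by platform and Lisp implementation):

    ;; Load CFFI (for example via Quicklisp) and call a function from the C math library.
    (ql:quickload :cffi)

    ;; Platform-specific: libm.so.6 on many Linux systems, libm.dylib on macOS;
    ;; on some Lisps libm is already linked in and this step is unnecessary.
    (cffi:load-foreign-library "libm.so.6")

    ;; Declare the foreign function once...
    (cffi:defcfun ("pow" c-pow) :double
      (base :double)
      (exponent :double))

    ;; ...then call it like any Lisp function.
    (c-pow 2.0d0 10.0d0)   ; => 1024.0d0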
ECL may be another good choice for you if you intend to embed Common Lisp within your C program.
(I'm not 100% sure what you mean, but I'll just throw some bits out there and see what happens.)
Most Lisps can do the C <--> Lisp type of thing by way of FFI, and there are compatibility layers/libraries for doing FFI, like the already-mentioned CFFI.
So you can pretty much always have Lisp call C functions and have C call Lisp functions, and most do it by loading .dll/.so files into the already running Lisp process. Note that this tends to be what other environments like Python (PyGTK etc.) do too. This is often exactly what you want, so you might perhaps want to ignore most of what I say below.
The only Lisp I can think of that enables one to do things the "other way around", i.e., load a .dll/.so which "is" Lisp or is produced by Lisp into an already running C process, is ECL.
In many cases it really does not matter where you put the entry point, or the "main() function" to use C terms. So if you'd like to use some other Lisp besides ECL but are thinking you "can't because ..", this is something to reconsider, since in many cases you can just shuffle things around a bit.
However, it's almost always a much better idea to use other IPC mechanisms and avoid any kind of FFI when you can.
I am searching for a programming language for which a compiler exists and that supports self-modifying code. I've heard that Lisp supports these features, but I was wondering if there is a more C/C++/D-like language with these features.
To clarify what I mean:
I want to be able to have, in some way, access to the program's code at runtime and apply any kind of change to it, that is, removing commands, adding commands, changing them.
As if I had the AST of my program. Of course I can't have that tree in a compiled language, so it must be done differently. The compiler would need to translate the self-modifying commands into their binary-equivalent modifications so they would work at runtime with the compiled code.
I don't want to be dependent on a VM, that's what I meant by compiled :)
Probably there is a reason Lisp is the way it is. Lisp was designed to program other languages and to compute with symbolic representations of code and data. The boundary between code and data is no longer there. This influences both the design and the implementation of a programming language.
Lisp has the syntactic features to generate new code, translate that code and execute it. Thus parsed code uses the same data structures (symbols, lists, numbers, characters, ...) that are used for other programs, too.
Lisp knows its data at runtime - you can query everything for its type or class. Classes are objects themselves, as are functions. So these elements of the programming language and of programs are themselves first-class objects and can be manipulated as such. 'Dynamic language' has nothing to do with 'dynamic typing'.
'Dynamic language' means that the elements of the programming language (for example via metaclasses and the meta-object protocol) and the program (its classes, functions, methods, slots, inheritance, ...) can be inspected at runtime and modified at runtime.
Probably the more of these features you add to a language, the more it will look like Lisp, since Lisp is pretty much the local maximum of a simple, dynamic, programmable programming language. If you want some of these features, then you might want to think about which features of your other programming language you have to give up, or are willing to give up. For example, for a simple code-as-data language, the whole C syntax model might not be practical.
So C-like and 'dynamic language' might not really be a good fit - the syntax is only one part of the whole picture. And even the C syntax model limits how easily we can work with a dynamic language.
C# has always allowed for self-modifying code.
C# 1 allowed you to essentially create and compile code on the fly.
C# 3 added "expression trees", which offered a limited way to dynamically generate code using an object model and abstract syntax trees.
C# 4 builds on that by incorporating support for the "Dynamic Language Runtime". This is probably as close as you are going to get to LISP-like capabilities on the .NET platform in a compiled language.
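A small sketch of the expression-tree route (this uses the stock System.Linq.Expressions API available since C# 3 / .NET 3.5; the class name is arbitrary):

    using System;
    using System.Linq.Expressions;

    class Demo
    {
        static void Main()
        {
            // Build the AST for x => x * x at runtime...
            ParameterExpression x = Expression.Parameter(typeof(int), "x");
            Expression<Func<int, int>> square =
                Expression.Lambda<Func<int, int>>(Expression.Multiply(x, x), x);

            // ...then compile it to a real delegate and call it.
            Func<int, int> f = square.Compile();
            Console.WriteLine(f(7));   // 49
        }
    }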
You might want to consider using C++ with LLVM for (mostly) portable code generation. You can even pull in clang as well to work with C parse trees (note that clang currently has incomplete support for C++, but is itself written in C++).
For example, you could write a self-modification core in C++ to interface with clang and LLVM, and the rest of the program in C. Store the parse tree for the main program alongside the self-modification code, then manipulate it with clang at runtime. Clang will let you directly manipulate the AST in any way, then compile it all the way down to machine code.
Keep in mind that manipulating your AST in a compiled language will always mean including a compiler (or interpreter) with your program. LLVM is just an easy option for this.
JavaScript + V8 (the Chrome JavaScript engine)
JavaScript is
dynamic
self-modifying (self-evaluating) (well, sort of, depending on your definition)
has a C-like syntax (again, sort of, that's the best you will get for dynamic)
And you can now compile it with V8: http://code.google.com/p/v8/
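For the "sort of self-modifying" part, the usual trick is to hold the code as text and recompile it at runtime with eval or the Function constructor, for example:

    // keep the function body around as data...
    let body = "return x + 1;";
    let f = new Function("x", body);
    console.log(f(10)); // 11

    // ..."modify the program" by rewriting the text and recompiling
    body = body.replace("x + 1", "x * 2");
    f = new Function("x", body);
    console.log(f(10)); // 20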
"Dynamic language" is a broad term that covers a wide variety of concepts. Dynamic typing is supported by C# 4.0 which is a compiled language. Objective-C also supports some features of dynamic languages. However, none of them are even close to Lisp in terms of supporting self modifying code.
To support such a degree of dynamism and self-modifying code, you should have a full-featured compiler to call at run time; this is pretty much what an interpreter really is.
Try Groovy. It's a dynamic, JVM-based language that is compiled at runtime. It should be able to execute its own code.
http://groovy.codehaus.org/
Otherwise, you've always got Perl, PHP, etc... but those are not, as you suggest, C/C++/D- like languages.
I don't want to be dependent on a VM, that's what I meant by compiled :)
If that's all you're looking for, I'd recommend Python or Ruby. They can both run on their own virtual machines and the JVM and the .Net CLR. Thus, you can choose any runtime you want. Of the two, Ruby seems to have more meta-programming facilities, but Python seems to have more mature implementations on other platforms.