I wonder if there is a way to introspect XQuery modules and dynamically access functions. This would help me implement the GoF strategy pattern, as XQuery does not support multiple implementations of the same interface.
The problem, of course, is that there seems to be no support for introspection in XQuery beyond checking the types of variables.
Any ideas on how this pattern can be implemented in XQuery? (I use MarkLogic 9)
thanks a lot,
K.
PS. Unfortunately, most XQuery resources I could find focus on the small details of this or that feature; I could not find one that treats XQuery as a serious programming language in its own right and addresses software design issues like this one.
XQuery 3.0+ and MarkLogic 9 support first-class functions. In many cases, you can rework common OO design patterns into functional programming equivalents using first-class functions and dependency injection.
Also, you can access in-scope functions via xdmp:functions().
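To make the first-class-function idea concrete, here is a minimal sketch of the strategy pattern done that way. It is written in Python purely for brevity (all names are invented for the example); the same shape carries over to XQuery 3.0 function items, where you would pass a named function reference such as local:percent-discount#1 into the consuming function.

```python
# Sketch only: the "strategy" is just a function value that gets injected
# (dependency injection) into the code that uses it.

def flat_discount(price: float) -> float:
    """One pricing strategy."""
    return price - 5.0

def percent_discount(price: float) -> float:
    """An alternative strategy with the same signature."""
    return price * 0.9

def checkout(prices, discount):
    """The 'context': it knows only the strategy's signature, not its implementation."""
    return sum(discount(p) for p in prices)

print(checkout([10.0, 20.0], flat_discount))     # 20.0
print(checkout([10.0, 20.0], percent_discount))  # 27.0
```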
There is a way to get reflection-like functionality in MarkLogic, but it's kind of a hack. Take a look at this library, which throws an exception, catches it, and uses the exception payload to inspect the stack:
https://github.com/marklogic/cq/blob/master/lib-debug.xqy
For more XQuery programming patterns and techniques, you might want to review the proceedings from various XML-oriented conferences like Balisage, XML Prague, and XML London. Also, searching GitHub repos for XQuery projects and reading code can be helpful.
I'm very interested in dataflow- and concurrency-focused languages. I've read up on the subject, and SIGNAL, Esterel, and Lustre come up repeatedly, so I take it they're prominent players in those fields. However, many of the links in the resources I found are dead, and the languages don't seem very accessible. I managed to find a couple of compilers I can build from source (the Polychrony Toolset for SIGNAL and the Columbia Compiler for Esterel), but both gave me trouble when building with CMake. Even textbooks teaching these languages have been tough to come by.
With the background out of the way, my actual questions are: is anyone here really familiar with this field of programming? Are these languages still big deals, or have they "died out" by now? Could it be that they're only available to big companies at a hefty price, so the average programmer wouldn't really be able to pick them up?
I ran into a couple of other dataflow/concurrency-oriented languages, such as Oz or E, but they seemed to be aimed mostly at education and not suitable for real-world projects. Not to say they aren't impressive languages, but their implementations are limited and you would be unlikely to see them in production contexts. Does anyone know of other languages in this field they can recommend that are actually accessible (good documentation, tutorials, and an installable compiler to actually code in)? Or can anyone speak to a language such as Oz or E and show that it really is good enough for large real-world projects?
None of the languages you mention is widespread. This means their compilers and runtimes have bugs, the communities are small and can offer little help, and linking with general-purpose libraries can be problematic.
I recommend using an actively supported general-purpose language such as Java, Scala, Kotlin or C++. They all have libraries to support asynchronous computations, and dataflow is no more than support for asynchronous procedure calls. You can even develop your own dataflow library. This is not that hard: I wrote a dataflow library for Java which is only 40 kilobytes of source code.
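To illustrate that reduction (a sketch only, using Python's standard library rather than Java, with a made-up node helper): a dataflow node is simply a computation scheduled to run once the futures it depends on have completed.

```python
# Sketch: dataflow expressed as asynchronous procedure calls over futures.
from concurrent.futures import ThreadPoolExecutor

def node(pool, fn, *inputs):
    """A 'dataflow node': run fn as soon as all input futures have results."""
    return pool.submit(lambda: fn(*(f.result() for f in inputs)))

with ThreadPoolExecutor(max_workers=8) as pool:
    a = pool.submit(lambda: 2)                   # source node
    b = pool.submit(lambda: 3)                   # source node
    s = node(pool, lambda x, y: x + y, a, b)     # fires when a and b are done
    d = node(pool, lambda x: x * 10, s)          # fires when s is done
    print(d.result())                            # 50
```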
Have you tried Céu? It is a recent variant of Esterel, and compiles to C. It is simple to understand, and provides a reactive and concurrent structuring of control flow. Native C calls can be made by just prefixing them with an underscore ("_printf").
http://ceu-lang.org
Also, see the paper "Structured Synchronous Reactive Programming with Céu" for a nice overview.
http://www.ceu-lang.org/chico/ceu_mod15_pre.pdf
These academic languages have mostly disappeared as such and live on in industrial tools:
Esterel and Lustre are the basis of Ansys' SCADE.
Signal is used in 3DS' ControlBuild.
Esterel was used in Synopsys' ConcentricStudio.
Researchers also use Heptagon for studies of synchronous languages: code generation, formal methods, new concepts.
I recently started programming in Julia for research purposes. Going through it I started loving the syntax, I have had a positive experience with the community here on SO, and now I am thinking about porting some code from other programming languages.
Working with computationally expensive forecasting models, it would be nice to have them all in a powerful modern language like Julia.
I would like to create a project and I am wondering how I should design it. I am thinking about it from both a performance and a language perspective (e.g.: would it be better to create modules, submodules and functions, or would something else be preferred? Am I better off using dictionaries or custom types?).
I have looked at different GitHub projects in my field, but I haven't really found a common standard. Therefore I am wondering: what is more in the spirit of the Julia language and philosophy?
EDIT:
It has been pointed out that this question might be too generic. Therefore, I would like to focus it on how best to structure modules (i.e. separate modules for main functions and subroutines, versus modules and submodules, etc.). I believe this would be enough for me to get a feel for what might be considered in the spirit of the Julia language and philosophy. Of course, additional examples and references are more than welcome.
The most you'll find is that there is an "official" style-guide. The rest of the "Julian" style is ill-defined, but there are some ways to heuristically define it.
First of all, it means designing the software around multiple dispatch and the type system. Software that follows a Julian design philosophy usually won't define a bunch of functions like test_pumpkin and test_pineapple; instead it will add dispatches on test for the types Pumpkin and Pineapple. This allows for clean, understandable code. It will break tasks up into small type-stable functions, which allows for good performance. It will likely also be written very generically, allowing the user to pass anything that is a subtype of AbstractArray or Number, and using the power of dispatch to make the software work on number types its author has never even heard of. (In this respect, custom types are recommended over dictionaries when you need performance. However, for a type you have to know all of the fields up front, which means some things still require dictionaries.)
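As a rough cross-language illustration of that "dispatch on types instead of encoding the type in the function name" point, here is a sketch using Python's functools.singledispatch (which only dispatches on the first argument, whereas Julia dispatches on all of them); the Pumpkin and Pineapple types are toy examples.

```python
# Toy sketch: one generic `test`, specialized per type, instead of
# test_pumpkin / test_pineapple. In Julia these would be methods of `test`
# for ::Pumpkin and ::Pineapple.
from functools import singledispatch

class Pumpkin: ...
class Pineapple: ...

@singledispatch
def test(x):
    raise NotImplementedError(f"no test method for {type(x).__name__}")

@test.register
def _(x: Pumpkin):
    return "testing a pumpkin"

@test.register
def _(x: Pineapple):
    return "testing a pineapple"

print(test(Pumpkin()))    # testing a pumpkin
print(test(Pineapple()))  # testing a pineapple
```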
Software that follows a Julian design philosophy may also implement a DSL (domain-specific language) to offer a simpler interface to the user. Instead of requiring the user to conform to archaic standards derived from C/Fortran, or to write large, repetitive inputs, the package may provide macros that let the user state the problem for the software to solve in a more natural way.
Other items which are part of the Julian design philosophy are up for much debate. Is proper Julia code devectorized? I would say no: the loop-fusing broadcast (.) is a powerful way to write MATLAB-style "vectorized" code and have it perform like a devectorized loop. However, I have seen others prefer devectorized styles.
Also note that Julia is very different from something like Python where in Julia, you can essentially "build your own standard way of doing something". Since there's no performance penalty for functions/types declared in packages rather than Base, you can build your own Julia world if you want, using macros to define your own "function-like" objects, etc. I mean, you can re-create Java styles in Julia if you wanted.
I am very new to C++ and confused about the difference between modular programming and function-oriented programming. I have never done modular programming, so I only know modules by the definition that they contain functions. So what is the difference between a sequential (function-oriented) language and modular programming? Thanks in advance.
EDIT:
I was reading about OOP in C++. It started with what unstructured programming is, then gave a basic idea of structured programming, then modular programming and finally OOP.
Modular programming is mostly a strategy to reduce coupling in a computer program, mainly by means of encapsulation.
Before modular programming, local coherence of the code was ensured by structured programming, but global coherence was lacking: if you decided that your spell-checking dictionary would be implemented as a red-black tree, then this implementation would be exposed to everyone else in the program, so that the programmer working on, say, text rendering, would be able to access the red-black tree nodes to do meaningful things with them.
Of course, this became hell once you needed to change the implementation of your dictionary, because then you would have to fix the code of other programmers as well.
Even worse, if the implementation detail involved global variables, then you had to be exceedingly careful of who changed them and in what order, or strange bugs would crop up.
Modular programming applied encapsulation to all of this, by separating the implementation (private to the module) from the interface (what the rest of the program can use). So, a dictionary module could expose an abstract type that would only be accessible through module functions such as findWord(word,dictionary). Someone working on the dictionary module would never need to peek outside that module to check if someone might be using an implementation detail.
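As a concrete (and made-up) sketch of that separation — using Python module conventions rather than a C++ header/source split, but the principle is identical: only the interface functions are meant to be visible, and the storage behind them is private to the module.

```python
# spellcheck.py -- hypothetical dictionary module.
# The storage (a plain set here; it could be a red-black tree) is an
# implementation detail; callers only use the interface functions below.

_words = set()                     # private by convention (leading underscore)

def add_word(word: str) -> None:
    """Public interface: add a word to the dictionary."""
    _words.add(word.lower())

def find_word(word: str) -> bool:
    """Public interface: look a word up without knowing how it is stored."""
    return word.lower() in _words
```

Swapping the set for a real tree later only touches this one file, because nothing outside it is supposed to reach the storage directly.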
They are both ways of structuring your code. If you're interested in function-oriented programming and want to understand it a bit better, I'd take a look at Lisp. C++ isn't truly function-oriented: every function should return a value, yet C++ functions can return void (making them procedures rather than functions), so it's not a true functional programming language in that sense.
"I have never done modular programming so I just know modules by definition that it contains functions".
Modules are a level higher than functions.
That's a good start. Think of a function as a unit of work that does something; when you have several functions that you can group in a certain way, you put them in a module. So, string.h has a bunch of functions for working with strings, and you simply include the header to get access to all of those functions directly. You can then reuse those modules in other projects: since they have already been used, and (I assume) debugged and tested, this stops people from reinventing the wheel. The whole point is to benefit from the cumulative experience.
I'd suggest you think of a project you'd like and write some functions and think about how you'd like to organize the code for another developer to use.
Hope this is of some use to you.
I believe functional programming is leading us toward the microservices paradigm these days, while modular programming is closer to the OOP concept.
I am searching for a programming language for which a compiler exists and that supports self-modifying code. I've heard that Lisp supports these features, but I was wondering if there is a more C/C++/D-like language with them.
To clarify what I mean:
I want to be able to have some kind of access to my program's code at runtime and apply any kind of change to it, that is, removing commands, adding commands, changing them.
As if I had the AST of my program. Of course I can't have that tree in a compiled language, so it must be done differently. The compiler would need to translate the self-modifying commands into their binary equivalent modifications so they would work at runtime with the compiled code.
I don't want to be dependent on a VM; that's what I meant by compiled :)
Probably there is a reason Lisp is the way it is. Lisp was designed to program other languages and to compute with symbolic representations of code and data. The boundary between code and data is no longer there. This influences both the design and the implementation of a programming language.
Lisp has syntactic features to generate new code, translate that code and execute it. Thus parsed code uses the same data structures (symbols, lists, numbers, characters, ...) that are used for any other data, too.
Lisp knows its data at runtime - you can query everything for its type or class. Classes are objects themselves, as are functions. So these elements of the programming language and of programs are first-class objects and can be manipulated as such. 'Dynamic language' has nothing to do with 'dynamic typing'.
'Dynamic language' means that the elements of the programming language (for example via metaclasses and the meta-object protocol) and of the program (its classes, functions, methods, slots, inheritance, ...) can be inspected at runtime and modified at runtime.
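Python shares a (smaller) slice of that dynamism, so a tiny made-up example may help show what "inspected and modified at runtime" means in practice:

```python
# Sketch: classes, methods and types are ordinary runtime objects that the
# running program can inspect and replace.
class Dictionary:
    def find(self, word):
        return word in {"cat", "dog"}

d = Dictionary()
print(type(d).__name__)    # 'Dictionary' -- the class can be queried at runtime
print(d.find("cat"))       # True

# Replace a method on the live class: the program modifies itself.
Dictionary.find = lambda self, word: True
print(d.find("zebra"))     # True -- existing instances pick up the new method
```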
Probably the more of these features you add to a language, the more it will look like Lisp, since Lisp is pretty much the local maximum of a simple, dynamic, programmable programming language. If you want some of these features, you might want to think about which features of your other programming language you would have to give up, or are willing to give up. For example, for a simple code-as-data language, the whole C syntax model might not be practical.
So C-like and 'dynamic language' might not really be a good fit - the syntax is only one part of the whole picture. The C syntax model itself limits how easily we can work with a dynamic language.
C# has always allowed for self-modifying code.
C# 1 allowed you to essentially create and compile code on the fly.
C# 3 added "expression trees", which offered a limited way to dynamically generate code using an object model and abstract syntax trees.
C# 4 builds on that by incorporating support for the "Dynamic Language Runtime". This is probably as close as you are going to get to LISP-like capabilities on the .NET platform in a compiled language.
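As a cross-language illustration of the "build a syntax tree at run time, compile it, and call it" idea (Python here, not the C# APIs this answer refers to; the generated function is invented for the example):

```python
# Sketch: construct code at run time, compile it, and execute the result.
import ast

# Equivalent to having written: def double(x): return x * 2
tree = ast.parse("def double(x):\n    return x * 2")

# The tree could be inspected or rewritten here (e.g. with ast.NodeTransformer)
# before being compiled -- the "expression tree" idea in miniature.
code = compile(tree, filename="<generated>", mode="exec")

namespace = {}
exec(code, namespace)              # load the freshly compiled function
print(namespace["double"](21))     # 42
```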
You might want to consider using C++ with LLVM for (mostly) portable code generation. You can even pull in Clang as well to work with C parse trees (note that Clang currently has incomplete support for C++, but is written in C++ itself).
For example, you could write a self-modification core in C++ that interfaces with Clang and LLVM, and the rest of the program in C. Store the parse tree for the main program alongside the self-modification code, then manipulate it with Clang at runtime. Clang will let you manipulate the AST directly in any way you like, then compile it all the way down to machine code.
Keep in mind that manipulating your AST in a compiled language will always mean including a compiler (or interpreter) with your program. LLVM is just an easy option for this.
JavaScript + V8 (the Chrome JavaScript engine)
JavaScript is
dynamic
self-modifying (self-evaluating) (well, sort of, depending on your definition)
has a C-like syntax (again, sort of, that's the best you will get for dynamic)
And you can now compile it with V8: http://code.google.com/p/v8/
"Dynamic language" is a broad term that covers a wide variety of concepts. Dynamic typing is supported by C# 4.0 which is a compiled language. Objective-C also supports some features of dynamic languages. However, none of them are even close to Lisp in terms of supporting self modifying code.
To support such a degree of dynamism and self-modifying code, you should have a full-featured compiler to call at run time; this is pretty much what an interpreter really is.
Try Groovy. It's a dynamic, Java-like, JVM-based language that is compiled at runtime. It should be able to execute its own code.
http://groovy.codehaus.org/
Otherwise, you've always got Perl, PHP, etc... but those are not, as you suggest, C/C++/D-like languages.
I don't want to be dependent on a VM; that's what I meant by compiled :)
If that's all you're looking for, I'd recommend Python or Ruby. They can both run on their own virtual machines, on the JVM, and on the .NET CLR. Thus, you can choose any runtime you want. Of the two, Ruby seems to have more meta-programming facilities, but Python seems to have more mature implementations on other platforms.
Some time back I was working on an algorithm that processed code and required a reflection API. We were interested in implementing it for multiple languages, but the reflection API for one language would not work for any other language. So is there anything like a "universal reflection API" that would work for all languages, or at least for a few mainstream ones (.NET, Java, Ruby, Python)?
If there isn't, is it possible to build such a thing that can process classes from different languages?
How would you go about having a unified way to process OO code from multiple languages?
I don't believe there is a universal reflection API. Any reflection API depends on the metadata that the compiler generates for the language constructs, and that metadata can vary quite a lot from language to language, even though there is a common subset across multiple languages.
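For a feel of what that metadata-driven querying looks like within a single runtime, here is a Python sketch with a made-up class; the point is that every answer it gives comes from metadata that this particular runtime happens to keep.

```python
# Sketch: reflection as queries over the metadata the runtime stores.
import inspect

class Account:
    def __init__(self, owner: str):
        self.owner = owner

    def close(self) -> None:
        pass

# Enumerate the class's methods and their signatures at run time.
for name, member in inspect.getmembers(Account, predicate=inspect.isfunction):
    print(name, inspect.signature(member))
# __init__ (self, owner: str)
# close (self) -> None
```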
In .NET there is CodeDOM, which provides a way to generate a universal syntax tree and then serialize it as code (C#, VB.NET, etc.) and/or compile it. Of course that's the mirror image of reflection, but if anyone ever writes a tool to generate the AST directly from IL, the functionality could start to overlap.
In any case, it's the closest thing I can think of.
A reflection API depends on the metadata generated for the code, so you can have a universal API for all languages on the JVM, or all languages on the CLR... but it wouldn't really be possible to make one that covers Python, Java, VB, etc.
If you want a universal API, you need to step outside the language. See our DMS meta-tool for processing arbitrary languages, and answering arbitrary questions, including those you think of as reflection.
(The OP asked for support for various languages: DMS has full parsers for C#, VB.NET, Java, and Python. Ruby is not yet in the list; we're working on it.)