What does Qt Quick Compiler do exactly? - qt

What does Qt Quick Compiler do exactly? My understanding was that it "compiles" QML/JS into C++ and integrates this into the final binary/executable, so there is no JIT compilation or any other JS-related work at runtime.
However, I saw an article somewhere claiming that this is not the case: it actually only "bundles" the QML/JS into the final binary/executable, and there is still some QML/JS-related overhead at runtime.
The documentation page gives this explanation:
.qml files as well as accompanying .js files can be translated into intermediate C++ source code. After compilation with a traditional compiler, the code is linked into the application binary.
What is this "intermediate C++ source code"? Why not just "C++ source code"? That confuses me, but the last statement kinda promises that yes, it is a C++ code, and after compiling it with C++ compiler you will have a binary/executable without any additional compiling/interpretation during runtime.
Is it how it actually is?

The code is of an intermediate nature because it doesn't map JavaScript directly to C++. For example, var i = 1, j = 2, k = i+j is not translated to the C++ equivalent double i = 1., j = 2., k = i+j. Instead, the code is translated to a series of operations that directly manipulate the state of the JS virtual machine. JS semantics are not something you can get for free from C++: there will be runtime costs no matter how you implement it. There is no additional compiling or interpretation, but the virtual machine that implements the JS state still has to exist.
That's not an overhead that is easy to get rid of without either emitting a lot of mostly dead code to cover all the contexts in which a given piece of code might run, or doing the just-in-time compilation you wanted to avoid. That's the primary problem with JavaScript: its semantics are such that it's generally not possible to translate it to typical imperative, statically typed code that gives rise to "standard" machine code.
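To make that contrast concrete, here is a purely illustrative sketch. The JSValue and Engine names are invented for this example; the real generated code uses Qt's internal QV4 types and looks quite different. The point is only that a "direct" translation drops JS semantics, while an intermediate translation routes every value and operation through a runtime that preserves them:

    #include <string>
    #include <variant>

    // Hypothetical stand-ins for the JS engine's value type and operations.
    struct JSValue { std::variant<double, std::string> v; };

    struct Engine {
        JSValue newNumber(double d) { return {d}; }
        std::string toString(const JSValue &x) {
            return std::holds_alternative<double>(x.v)
                       ? std::to_string(std::get<double>(x.v))
                       : std::get<std::string>(x.v);
        }
        JSValue add(const JSValue &a, const JSValue &b) {
            // Must implement JS '+' semantics: numeric addition, string
            // concatenation, implicit conversions... that is the unavoidable cost.
            if (std::holds_alternative<double>(a.v) && std::holds_alternative<double>(b.v))
                return {std::get<double>(a.v) + std::get<double>(b.v)};
            return {toString(a) + toString(b)};
        }
    };

    int main() {
        // Direct C++ translation of `var i = 1, j = 2, k = i + j` -- no JS left:
        double i = 1, j = 2, k = i + j;

        // Intermediate-style translation: every value and operation goes through
        // the engine so that JS semantics are preserved at runtime.
        Engine engine;
        JSValue ji = engine.newNumber(1), jj = engine.newNumber(2);
        JSValue jk = engine.add(ji, jj);

        (void)k; (void)jk;
        return 0;
    }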

Your question already contains the answer.
It compiles the code into C++. That C++ is of an intermediate nature because having C++ code is not enough: you need binaries. So after the translation to C++, the files are compiled into binaries, which are then linked.
The statement only says: we do not compile to a binary, but to C++ instead. You then need to compile it into a binary with a C++ compiler of your choice.
The bundling happens if you only put the files into the resources (qrc file). Putting them into the resources does not imply that you use the compiler.
Then there is the JIT compiler, which might (on supported platforms) do just-in-time compilation. More on this here
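To make the bundling-vs-compiling distinction concrete, here is a minimal qmake sketch (assuming Qt 5.11 or later, where the Qt Quick Compiler can be enabled via a qmake CONFIG flag; the resource file name is a placeholder):

    # Bundling only: the .qml/.js files are embedded as resources and are still
    # loaded and processed by the QML engine at runtime.
    RESOURCES += qml.qrc

    # Opting into the Qt Quick Compiler: the bundled files are additionally
    # translated to intermediate C++ and compiled into the binary.
    CONFIG += qtquickcompiler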

Related

How are we actually supposed to include our OpenCL code?

How are we actually supposed to include our OpenCL code in our C projects?
We can't possibly be supposed to ship our .cl files along with our executable for the executable to find them and load them at runtime, because that's stupid, right?
We can't be supposed to use some stringify macro, because a) that's apparently not portable / leads to undefined behaviour, b) it all breaks down if you use commas not enclosed in brackets, like when defining several variables of the same type (I've spent an hour looking for a solution to that and there doesn't seem to be one that actually works), and c) that's kind of stupid.
Are we expected to write our code into C string literals like "int x, y;\n" "float4 p;\n"? Because I'm not doing that. Are we supposed to do a C include-style hexdump of our .cl files? That seems inconvenient. What are we actually supposed to do?
It's bad enough that all these approaches basically mean you have to ship your program with your OpenCL code essentially open-sourced, when your OpenCL code is probably the last thing you want open-sourced. On top of that, it seems every OpenCL project I've seen uses one of the approaches listed above. It just doesn't seem right at all; it's like the people who made OpenCL forgot about something.
This thread: OpenCL bytecode running on another card mentions SPIR, a "platform-portable intermediate representation for OpenCL device programs". Other than that, you are basically restricted to the options you already mentioned.
Personally, I began to use C++11 raw string literals to get rid of my nasty stringify-macros. Don't know if C++ is an option for you, however.
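For example, a minimal sketch of that approach (the kernel itself is just a placeholder):

    #include <string>

    // C++11 raw string literal: commas, quotes and newlines in the kernel need
    // no escaping, unlike stringify macros or hand-written "..." literals.
    static const std::string kernelSource = R"CLC(
    __kernel void add(__global const float *a,
                      __global const float *b,
                      __global float *out)
    {
        size_t i = get_global_id(0);
        out[i] = a[i] + b[i];
    }
    )CLC";

    // Later, pass kernelSource.c_str() to clCreateProgramWithSource() as usual.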
Concerning your rejection of the "ship our .cl files along with our executable" approach: I don't see why this is inherently stupid -- the CL "shaders" are an application resource like all the other separate files beside the executable, and thus are part of the "application bundle". It's perfectly reasonable to have such files, and each operating system has its own way of dealing with them (in win32, the program directory is the bundle https://blogs.msdn.microsoft.com/oldnewthing/20110620-00/?p=10393 , OSX has its own bundle concept, etc...).
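A minimal sketch of that resource-file approach, assuming the .cl file ships next to the executable (path handling and error reporting are simplified):

    #include <fstream>
    #include <sstream>
    #include <stdexcept>
    #include <string>

    // Read the kernel source from disk at runtime; the resulting string is then
    // handed to clCreateProgramWithSource() like any other source text.
    std::string loadKernelSource(const std::string &path)
    {
        std::ifstream in(path, std::ios::binary);
        if (!in)
            throw std::runtime_error("cannot open kernel file: " + path);
        std::ostringstream buffer;
        buffer << in.rdbuf();
        return buffer.str();
    }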
Now, if you are worried about other people peeking into your OpenCL code, you can still apply some obfuscation methods (e.g. encrypt your .cl-files by a key which is more or less cleverly hidden in your executable).
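As a sketch of that idea, here is a toy XOR scheme; it only deters casual inspection and is not real protection:

    #include <cstddef>
    #include <string>

    // Recover the OpenCL source from an XOR-obfuscated blob, using a key that is
    // embedded somewhere in the executable.
    std::string deobfuscate(const std::string &blob, const std::string &key)
    {
        std::string out(blob.size(), '\0');
        for (std::size_t i = 0; i < blob.size(); ++i)
            out[i] = blob[i] ^ key[i % key.size()];
        return out;
    }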
[edit/sidenote]: We could also investigate how other companies deal with this issue in the context of, for example, OpenGL/Direct3D shaders. In my limited experience, gaming companies tend to dump their shaders in text form somewhere in their application directory, for all to see (and even to tamper with). So in the gaming world at least, there is no great deal of secrecy in that respect... I wonder what Adobe or CAD software companies do in their professional software.

Ada code instrumentation as GNAT compilation part?

I'm looking for the best way to integrate the GNAT compiler with our custom code analysis/modification tools. We use custom tools to perform different code metrics (like execution time, test coverage, etc.) and even do some code obfuscation. So, for example, to measure code execution time I need to insert 2 procedure calls into each function/procedure (the first one where the function starts, and the other one at each function exit). The code for these 2 procedures is implemented in a separate translation unit. What is the best way to do this kind of code instrumentation (inserting/modifying code) with the GNAT compiler, in terms of simplicity and performance? I can think of several ways (a sketch of such entry/exit hooks follows after the list):
Does the GNAT compiler support code generation plugins of any kind? It seems that it doesn't, but maybe I missed something while googling about it. Maybe there is a way to do it using some metaprogramming tricks (as in some modern programming languages like Nimrod and D), but I couldn't find whether Ada supports metaprogramming at all.
Looks like the ASIS library can help me out, but it is made for creating separate tools. Is it possible to integrate an ASIS-based tool with GNAT? For example, to write a tool that would be loaded by GNAT during compilation and would modify nodes in the AST before it (the AST) is about to be transformed into GIMPLE. Using the ASIS-based tool separately (for example, by preprocessing each source file before passing it to the compiler) would hurt compilation time, as the source code would need to be parsed twice (by the tool and by the compiler) and saved/loaded to/from some temporary location on disk.
Is it possible to get GIMPLE from the GNAT compiler, modify it, and pass it on to GCC? It seems that GIMPLE is used internally only: I can dump it with the GCC compiler, but I can't recompile the modified GIMPLE afterwards, since there appears to be no working GIMPLE front end for GCC.
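Regarding the entry/exit hooks sketched in the question: one existing mechanism worth noting is GCC's -finstrument-functions option, which makes the compiler emit a call at every function entry and exit, with the two hooks supplied in a separate translation unit. Since GNAT uses GCC as its code generator, the option should apply to Ada compilation as well, though that is worth verifying for your GNAT version. A minimal sketch of such hooks, with placeholder logging in place of real timing code:

    #include <cstdio>

    extern "C" {

    // Hooks called by code compiled with -finstrument-functions. They must not
    // be instrumented themselves, hence the attribute.
    __attribute__((no_instrument_function))
    void __cyg_profile_func_enter(void *fn, void *call_site)
    {
        std::printf("enter %p (called from %p)\n", fn, call_site);  // placeholder: start a timer here
    }

    __attribute__((no_instrument_function))
    void __cyg_profile_func_exit(void *fn, void *call_site)
    {
        std::printf("exit  %p (called from %p)\n", fn, call_site);  // placeholder: stop the timer here
    }

    } // extern "C"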

convert JIT to EXE?

There are so many JIT implementations out there, and every JIT emits native code. So why hasn't someone made a tool like JIT2EXE to save the native code into a native executable?
The question is kind of vague, as you have not clearly specified what language you are talking about. In my area, .NET, executables can be pre-jitted in order to speed up loading times. The code can be compiled to native code by a process known as NGEN, which takes the .NET IL code and converts it into a binary that can be understood by the processor. NGEN'd code is usually stored in the folder 'C:\Windows\Assembly\NativeImages_{version}', where the version represents the .NET Framework version. Have a look at the CodeGuru article by Jeffrey Richter about NGEN, where it could be used and when to use it; also have a look at the CodeProject article comparing it with native binary code, and at the one by Daniel Pistelli as well.
You mean something like ngen?
As a matter of fact, there are many Java (or other interpreted languages)-to-native compilers. Ever heard of gcj?
http://gcc.gnu.org/java/
There are also mixed compilers that compile some critical parts to native code and keep the others as bytecode to save space. Harissa did this more than 10 years ago.
http://www.usenix.org/publications/library/proceedings/coots97/full_papers/muller/muller_html/usenix.html
The Java code is first compiled to C code, which is then passed to the regular C compiler in order to take advantage of its optimizations. Such code can turn out to be very fast.
Of course, such ahead-of-time compilation (as opposed to just-in-time compilation) cancels some of the advantages of the bytecode form (especially portability and low memory footprint), so real-world applications are rather rare.
What you state in your question ("no one has made a tool like JIT2EXE") is not quite true:
http://en.wikipedia.org/wiki/AOT_compiler

Compiled dynamic language

I'm searching for a programming language for which a compiler exists and that supports self-modifying code. I've heard that Lisp supports these features, but I was wondering if there is a more C/C++/D-like language with these features.
To clarify what I mean:
I want to be able to have, in some way, access to the program's code at runtime and apply any kind of change to it, that is, removing commands, adding commands, changing them.
As if I had the AST of my program. Of course I can't have that tree in a compiled language, so it must be done differently. The compiler would need to translate the self-modifying commands into their binary-equivalent modifications so they would work at runtime with the compiled code.
I don't want to be dependent on a VM, that's what I meant by compiled :)
Probably there is a reason Lisp is the way it is. Lisp was designed to program other languages and to compute with symbolic representations of code and data. The boundary between code and data is no longer there. This influences both the design AND the implementation of a programming language.
Lisp has syntactic features to generate new code, translate that code and execute it. Thus pre-parsed code uses the same data structures (symbols, lists, numbers, characters, ...) that are used for other programs, too.
Lisp knows its data at runtime - you can query everything for its type or class. Classes are objects themselves, as are functions. So these elements of the programming language and of the programs are themselves first-class objects and can be manipulated as such. Dynamic language has nothing to do with 'dynamic typing'.
'Dynamic language' means that the elements of the programming language (for example via metaclasses and the meta-object protocol) and the program (its classes, functions, methods, slots, inheritance, ...) can be inspected at runtime and modified at runtime.
Probably the more of these features you add to a language, the more it will look like Lisp, since Lisp is pretty much the local maximum of a simple, dynamic, programmable programming language. If you want some of these features, you might want to think about which features of your other programming language you have to give up or are willing to give up. For example, for a simple code-as-data language, the whole C syntax model might not be practical.
So C-like and 'dynamic language' might not really be a good fit - the syntax is only one part of the whole picture. But even the C syntax model limits how easily we can work with a dynamic language.
C# has always allowed for self-modifying code.
C# 1 allowed you to essentially create and compile code on the fly.
C# 3 added "expression trees", which offered a limited way to dynamically generate code using an object model and abstract syntax trees.
C# 4 builds on that by incorporating support for the "Dynamic Language Runtime". This is probably as close as you are going to get to LISP-like capabilities on the .NET platform in a compiled language.
You might want to consider using C++ with LLVM for (mostly) portable code generation. You can even pull in clang as well to work with C parse trees (note that clang currently has incomplete support for C++, but is written in C++ itself).
For example, you could write a self-modification core in C++ to interface with clang and LLVM, and the rest of the program in C. Store the parse tree for the main program alongside the self-modification code, then manipulate it with clang at runtime. Clang will let you manipulate the AST directly in any way, then compile it all the way down to machine code.
Keep in mind that manipulating your AST in a compiled language will always mean including a compiler (or interpreter) with your program. LLVM is just an easy option for this.
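As a rough sketch of what runtime code generation with LLVM looks like (API details vary between LLVM versions, so treat this as illustrative; it builds the IR for an `add` function but omits the execution step, which would go through LLVM's ORC/MCJIT machinery):

    #include "llvm/IR/IRBuilder.h"
    #include "llvm/IR/LLVMContext.h"
    #include "llvm/IR/Module.h"
    #include "llvm/IR/Verifier.h"

    #include <memory>

    // Build a module containing `int add(int, int)` at runtime as LLVM IR.
    std::unique_ptr<llvm::Module> makeAddModule(llvm::LLVMContext &ctx)
    {
        auto module = std::make_unique<llvm::Module>("generated", ctx);
        llvm::IRBuilder<> builder(ctx);

        llvm::Type *i32 = builder.getInt32Ty();
        auto *fnType = llvm::FunctionType::get(i32, {i32, i32}, /*isVarArg=*/false);
        auto *fn = llvm::Function::Create(fnType, llvm::Function::ExternalLinkage,
                                          "add", module.get());

        auto *entry = llvm::BasicBlock::Create(ctx, "entry", fn);
        builder.SetInsertPoint(entry);

        auto args = fn->arg_begin();
        llvm::Value *lhs = &*args++;
        llvm::Value *rhs = &*args;
        builder.CreateRet(builder.CreateAdd(lhs, rhs, "sum"));

        llvm::verifyFunction(*fn);
        return module;
    }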
JavaScript + V8 (the Chrome JavaScript compiler)
JavaScript is
dynamic
self-modifying (self-evaluating) (well, sort of, depending on your definition)
has a C-like syntax (again, sort of, that's the best you will get for dynamic)
And you now can compile it with V8: http://code.google.com/p/v8/
"Dynamic language" is a broad term that covers a wide variety of concepts. Dynamic typing is supported by C# 4.0 which is a compiled language. Objective-C also supports some features of dynamic languages. However, none of them are even close to Lisp in terms of supporting self modifying code.
To support such a degree of dynamism and self-modifying code, you should have a full-featured compiler to call at run time; this is pretty much what an interpreter really is.
Try Groovy. It's a dynamic, JVM-based, Java-like language that is compiled at runtime. It should be able to execute its own code.
http://groovy.codehaus.org/
Otherwise, you've always got Perl, PHP, etc... but those are not, as you suggest, C/C++/D-like languages.
I don't want to be dependent on a VM, that's what I meant by compiled :)
If that's all you're looking for, I'd recommend Python or Ruby. They can both run on their own virtual machines and the JVM and the .Net CLR. Thus, you can choose any runtime you want. Of the two, Ruby seems to have more meta-programming facilities, but Python seems to have more mature implementations on other platforms.

Flex - AS3 vs. MXML - Is there a compilation speed difference, and how does the mxml compiler work?

Does MXML get compiled down to as3 and then converted to flash bytecode? Also, is there a significant performance penalty to compiling mxml vs compiling as3?
Yes, it boils down to AS3; most, possibly all, MXML components are just a tag version of an AS3 class.
There is no raw difference in compile times; however, because MXML requires the Flex framework, MXML projects do take longer to compile.
So... in a way, MXML is slower, but not dramatically so.
This is a bit of a correction of Jasconius.
All MXML functions as a form of pre-compiler directive to generate a class. mxmlc.exe will convert it to a series of temporary .as files before running the final compiler. You can actually see how the compiler does this by using the keep-generated-actionscript compiler option.
Because this is a two-step process, it will always take longer to compile something written in MXML. But even on slower machines and large projects, that will not cause significant difficulties -- the real time goes into converting everything into bytecode. And it is not without benefit.
The major bonus of the MXML syntax is that it is easier to read, easier to conceptualize, and easier to debug. It also makes it much simpler to separate form and content. Any time you might lose in the compilation process, you will gain back ten-fold while programming.
A simple hello-world application in Flex will be more than 100 KB, compared to a couple of KB for a pure AS3 project. This is because Flex compiles a lot of dependencies into your SWF. So yes, there is a penalty in terms of bandwidth required.
Compile a simple Flex app after adding -keep to the additional compiler arguments field in Project | Properties | Flex Compiler, then check the auto-generated folder named "generated" in your source folder to see what the compiler generates.
