How to create QML bindings for D?

I want to use QML with the D language, but there are no D bindings for it, and I want to create them. However, I don't know how to begin. Please tell me how to get started creating the bindings.

Since nobody answered:
From what I understand, QML is Qt's modelling language, and I assume it depends heavily on Qt, at least to some extent.
First of all, there was already an attempt to bind Qt to D: http://www.dsource.org/projects/qtd, but from what I've heard this project is more or less dead and no longer developed (last commit two years ago). You could use it as a base for your work, or as a reference on how to bind QML and Qt.
Option 1: a C/C++ glue layer
A C glue layer means you essentially write your code twice: you write a complete C++-to-C wrapper in C++ (the language which can directly interface with Qt and QML). That means you wrap every method of a class in a C function which takes a pointer to a struct representing that C++ Qt class. This could look like the following (note: this is an abstraction over GtkWebKit, which is written in C, but the snippet demonstrates the idea quite well):
// somewhere in a header
typedef struct SurfiClient {
    GtkWidget *window; // offscreen window
    // ...
} SurfiClient;

typedef GdkPixbuf Pixbuf;

extern "C" {
    Pixbuf* surfi_client_get_pixbuf(SurfiClient* client)
    {
        // in C++, gtk_offscreen_window_get_pixbuf would be a method of client->window
        return gtk_offscreen_window_get_pixbuf(GTK_OFFSCREEN_WINDOW(client->window));
    }

    // here go the rest of these functions, probably thousands
}
You basically have to do this for everything you want to interface with later from the D side. Even worse, you also have to do it for namespaces and free functions which are not marked extern "C". This could look like the following (libsquish):
typedef unsigned char u8;

extern "C" {
    void CompressMasked(u8 const* rgba, int mask, void* block, int flags) {
        squish::CompressMasked(rgba, mask, block, flags);
    }
}
As you can see by now, this is quite tedious...
Let's assume you have finished the C/C++ glue layer; now you have to write the D code which interfaces with it.
To stay with the GTK example:
extern(C) {
    // Using an opaque struct is one option:
    struct SurfiClient;

    // The other is to mirror the struct layout exactly
    // (pick one of the two, they cannot both be declared):
    // struct SurfiClient {
    //     GtkWidget* window;
    // }

    // Pixbuf was only a typedef to GdkPixbuf, which is already an opaque
    // data structure - easy:
    struct Pixbuf;

    Pixbuf* surfi_client_get_pixbuf(SurfiClient* client);
}
This example shows a problem: if you want to mirror the SurfiClient struct exactly, you also have to wrap GtkWidget, or do it incorrectly and use void* instead of GtkWidget*, which is no real solution. You will most likely run into this problem too: your glue-layer struct has members for which you have no abstractions. I would go with the opaque struct here and provide accessor functions for only those members the user really needs.
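Such an accessor on the glue side could look like this minimal sketch (surfi_client_get_window is a made-up name, not part of any real API):

// C++ glue side: expose only the members the D side actually needs,
// so SurfiClient can stay opaque over there.
extern "C" GtkWidget* surfi_client_get_window(SurfiClient* client)
{
    return client->window;
}

On the D side this is then just one more function declaration inside the extern(C) block.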
I am not going into more detail on how to interface with C; there are already a few guides:
http://dlang.org/interfaceToC.html
http://www.gamedev.net/blog/1140/entry-2254003-binding-d-to-c/
https://github.com/D-Programming-Deimos (Not a guide but a collection of C bindings, could be used as reference)
The last step in the process of making Qt/QML bindings would be to rebuild the OOP API in D on top of your newly made C bindings.
Option 2: SWIG / a binding generator
I am not an expert with SWIG, which is why I only cover it in a few sentences.
You can use SWIG to generate the whole C/C++ glue layer for you. If you're lucky, your SWIG file consists only of a few includes of Qt headers and SWIG does all the work for you. If not, you have to define rules for classes and functions on your own, which can be tedious (but still easier and faster than Option 1). So SWIG is definitely worth a try!
As a side note: if you have a template-heavy, macro-heavy, or header-only C++ library like glm, SWIG can be tricky, or (in the case of glm) not an option at all.
There are alternative binding generators, e.g. the PySide project started with Boost.Python and then switched to Shiboken. I don't know how easily you can generate bindings with Shiboken for anything other than CPython; maybe hacking on Shiboken or even Boost.Python could work? Also worth a read: http://setanta.wordpress.com/binding-c/.
QtD used QtJambi, so this might be a good start.
Option 3: the D way
D has the great feature of extern(C++), which in theory allows making C++/D bindings without such a glue layer: http://dlang.org/cpp_interface.html.
A nice idea, but unfortunately too limited. E.g. there is no support for namespaces yet (there is an open issue on Bugzilla which I can't find right now). In my opinion extern(C++) is too limited for Qt.
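To illustrate the limitation on the C++ side (a made-up example, not from Qt):

// A free function at global scope: D's extern(C++) can bind this.
int add(int a, int b) { return a + b; }

// The same function inside a namespace: not bindable from D at the
// time of writing, which rules out large parts of real C++ APIs.
namespace math {
    int add(int a, int b) { return a + b; }
}

Until namespace support lands, anything like the second form has to go through a C glue function instead.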
Manu Evans mentioned in his first talk at the D conference how to successfully bind to C++ from D using D's metaprogramming capabilities.
In a nutshell
A C/C++ glue layer gives you the most flexibility and it will work, but it is not simple and above all it is a long task (I would only do this for rather small projects).
SWIG/binding generators are the way I would go for Qt; once set up correctly they do all the work for you (in the best case).
extern(C++) is a nice idea, but too limited for most serious C++ projects.
I hope this gives you a short overview of what you can do and the amount of work it requires.

Is there already an article on how to interface C code with D?
Usually it is not hard: take the function declaration and put it into an extern(C) block.
And usually these modules are placed in a c package. Example:
src/
`-- appName
|-- c
| `-- dInterface.d
`-- dwrapper.d
The module appName.c.dInterface declares the C functions inside an extern(C) block,
while the module appName.dwrapper provides an API that fits D more idiomatically.

Related

SIGNAL & SLOT macros in Qt: what do they do?

I'm a beginner in Qt, trying to understand the SIGNAL and SLOT macros. While learning to use the connect method to bind a signal to a slot, I found that the tutorials on Qt's official reference page use:
connect(obj1, SIGNAL(signal(int)), obj2, SLOT(slot()))
However, this also works very well:
connect(obj1, &Obj1::signal, obj2, &Obj2::slot)
So what exactly do the SIGNAL and SLOT macros do? Do they just look up the signal in the class the object belongs to and return its address?
Then why do most programmers use these macros instead of &Obj1::signal, since the latter appears simpler and the code doesn't need to change if the parameters of the signal function change?
Before Qt 5, the SIGNAL and SLOT macros used to be the only way to make connections. Such a connection is made at runtime and requires the signals and slots to be marked as such in the header. For example:
class MyClass : public QObject
{
    Q_OBJECT

signals:
    void Signal();

public slots:
    void ASlotFunction();
};
To avoid repetition, the way in which it works is described in the Qt 4 documentation:
The signal and slot mechanism is part of the C++ extensions that are provided by Qt and make use of the Meta Object Compiler (moc).
This explains why signals and slots use the moc.
The second connect method is much improved, since the specified functions are checked at compile time, not at runtime. In addition, by using the address of a function, you can refer to any class method, not just those in the sections marked as slots.
The documentation was updated for Qt 5.
In addition, there's a good blog post about the Qt 4 connect workings here and Qt 5 here.
An addition to the first answer:
What exactly do the SIGNAL and SLOT macros do?
Almost nothing. Look at qobjectdefs.h:
# define SLOT(a) "1"#a
# define SIGNAL(a) "2"#a
They just turn the signature into a string literal prefixed with "1" or "2". This means the following code is valid and works as expected:
QObject *obj = new QObject;
// suppose `this` is a pointer to a QDialog subclass
connect(obj, "2objectNameChanged(QString)", this, "1show()");
obj->setObjectName("newName");
Why do most programmers use these macros instead of &Obj1::signal?
Because these macros work not only in Qt 5.
Because with these macros there is no extra complexity with overloaded signals (disambiguating overloads with the new syntax can make your code very messy, and it is really not a simple thing).
Because with the new syntax you sometimes need to use specific disconnects.
More details here.
To complete TheDarkKnight's answer: it is an excellent practice to refactor legacy code that uses the old Qt 4 SIGNAL and SLOT macros to Qt 5's new function-address syntax.
Suddenly, connection errors appear at compile time instead of at runtime! It is very easy to make a Qt 4 connection error, as any spelling mistake results in one. Plus, the function name must be fully qualified, i.e. preceded with the full namespace, if any.
Another benefit is the ability to use a lambda as the slot, which reduces the need for a named function when the slot body is trivial.
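For example, a minimal self-contained sketch (the button and the message are made up):

#include <QApplication>
#include <QPushButton>
#include <QDebug>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    QPushButton button("Click me");
    // The slot is a lambda: no named function, checked at compile time.
    QObject::connect(&button, &QPushButton::clicked,
                     []() { qDebug() << "clicked"; });
    button.show();
    return app.exec();
}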
These macros just convert their parameters to signal/slot-specific strings. The Differences between String-Based and Functor-Based Connections can be found in the docs. In short:
String-based:
Type checking is done at Run-time
Can connect signals to slots which have more arguments than the signal (using default parameters)
Can connect C++ functions to QML functions
Functor-based:
Type checking is done at Compile-time
Can perform implicit type conversions
Can connect signals to lambda expressions

What's the point of unique_ptr?

Isn't a unique_ptr essentially the same as a direct instance of the object? I mean, there are a few differences with dynamic inheritance, and performance, but is that all unique_ptr does?
Consider this code to see what I mean. Isn't this:
#include <iostream>
#include <memory>
using namespace std;

void print(int a) {
    cout << a << "\n";
}

int main()
{
    unique_ptr<int> a(new int()); // value-initialized to 0
    print(*a);
    return 0;
}
Almost exactly the same as this:
#include <iostream>
using namespace std;

void print(int a) {
    cout << a << "\n";
}

int main()
{
    int a = 0;
    print(a);
    return 0;
}
Or am I misunderstanding what unique_ptr should be used for?
In addition to the cases mentioned by Chris Pitman, one more case where you will want to use std::unique_ptr is when you instantiate sufficiently large objects: then it makes sense to allocate them on the heap rather than on the stack. The stack size is not unlimited, and sooner or later you might run into a stack overflow. That is where std::unique_ptr is useful.
The purpose of std::unique_ptr is to provide automatic, exception-safe deallocation of dynamically allocated memory (unlike a raw pointer, which must be explicitly deleted in order to be freed and is easy to inadvertently leak when exceptions interleave).
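To make the exception-safety point concrete, a small sketch (may_throw stands in for any code that can throw):

#include <memory>
#include <stdexcept>

void may_throw() { throw std::runtime_error("boom"); }

void leaky()
{
    int* p = new int(7);
    may_throw();   // unwinds past the delete below: the int leaks
    delete p;
}

void safe()
{
    auto p = std::make_unique<int>(7);
    may_throw();   // unwinding destroys p, which frees the int
}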
Your question, though, is more about the value of pointers in general than about std::unique_ptr specifically. For simple builtin types like int, there generally is very little reason to use a pointer rather than simply passing or storing the object by value. However, there are three cases where pointers are necessary or useful:
Representing a separate "not set" or "invalid" value.
Allowing modification.
Allowing for different polymorphic runtime types.
Invalid or not set
A pointer supports an additional nullptr value, indicating that it has not been set. For example, if you want to support every value of a given type (e.g. the entire range of integers) but also represent the notion that the user never entered a value, that is a case for std::unique_ptr<int>: a null pointer means "not set", without having to throw away a valid integer value just to use it as an invalid "sentinel" denoting that nothing was set.
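A minimal sketch of this, assuming a hypothetical read_user_value helper:

#include <iostream>
#include <memory>

// Null means the user never set a value, so every int (including 0
// and -1) remains a legal input.
std::unique_ptr<int> read_user_value(bool user_typed_something, int value)
{
    if (!user_typed_something)
        return nullptr;                   // "not set"
    return std::make_unique<int>(value);  // any int is valid
}

int main()
{
    auto v = read_user_value(true, -1);
    if (v)
        std::cout << "user entered " << *v << "\n";
    else
        std::cout << "no value entered\n";
}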
Allowing modification
This can also be accomplished with references rather than pointers, but pointers are one way of doing it. If you use a regular value, you are dealing with a copy of the original, and any modifications affect only that copy. If you use a pointer or a reference, your modifications are visible to the owner of the original instance. With a unique pointer, you can additionally be assured that no one else has a copy, so it is safe to modify without locking.
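For instance (a trivial sketch; bump is a made-up function):

#include <iostream>
#include <memory>

// The callee receives a pointer to the caller's instance, so the
// modification is visible to the owner, unlike pass-by-value.
void bump(int* n) { ++*n; }

int main()
{
    auto n = std::make_unique<int>(41);
    bump(n.get());            // mutates the original, not a copy
    std::cout << *n << "\n";  // prints 42
}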
Polymorphic types
This can likewise be done with references, not just with pointers, but there are cases where due to semantics of ownership or allocation, you would want to use a pointer to do this... When it comes to user-defined types, it is possible to create a hierarchical "inheritance" relationship. If you want your code to operate on all variations of a given type, then you would need to use a pointer or reference to the base type. A common reason to use std::unique_ptr<> for something like this would be if the object is constructed through a factory where the class you are defining maintains ownership of the constructed object. For example:
class Airline {
public:
    Airline(const AirplaneFactory& factory);
    // ...

private:
    // ...
    void AddAirplaneToInventory();

    // The factory can create many different types of airplanes,
    // such as a Boeing747 or an Airbus320.
    const AirplaneFactory& airplane_factory_;
    std::vector<std::unique_ptr<Airplane>> airplanes_;
};

// ...
void Airline::AddAirplaneToInventory() {
    airplanes_.push_back(airplane_factory_.Create());
}
As you mentioned, virtual classes are one use case. Beyond that, here are two others:
Optional instances of objects: my class may delay instantiating an instance of the object. To do so, I need to use heap allocation, but I still want the benefits of RAII.
Integrating with C libraries or other libraries that love returning naked pointers. For example, OpenSSL returns pointers from many (poorly documented) functions, some of which you need to clean up yourself. Having a non-copyable pointer container is perfect for this case, since I can protect the pointer as soon as it is returned.
A unique_ptr works the same as a normal pointer, except that you do not have to remember to free it (in fact, it is simply a wrapper around a pointer). After you allocate the memory, you do not have to call delete afterwards, since the destructor of unique_ptr takes care of that for you.
Two things come to my mind:
You can use it as a generic, exception-safe RAII wrapper: any resource that has a "close" function can be wrapped with unique_ptr easily by using a custom deleter (see the sketch after this list).
There are also times when you have to move a pointer around without knowing its lifetime explicitly. If the only constraint you know of is uniqueness, then unique_ptr is an easy solution. You could almost always do manual memory management in that case as well, but that is not automatically exception-safe and you could forget to delete, or the place where you have to delete could change as the code evolves. The unique_ptr solution is easily more maintainable.
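A minimal sketch of the custom-deleter idea, wrapping a C FILE* so that fclose runs automatically even if an exception unwinds the stack:

#include <cstdio>
#include <memory>

// Deleter that closes the file; unique_ptr calls it on destruction.
struct FileCloser {
    void operator()(std::FILE* f) const { if (f) std::fclose(f); }
};

using FilePtr = std::unique_ptr<std::FILE, FileCloser>;

int main()
{
    FilePtr f(std::fopen("data.txt", "r"));
    if (!f)
        return 1;  // fopen failed, nothing to close
    // ... read from f.get() ...
    return 0;      // fclose runs here via the deleter
}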

Go Programming - bypassing access privileges using pointers

Let's say I have the following hierarchy for my project:
fragment/fragment.go
main.go
And in fragment.go I have the following code, with one getter and no setter:
package fragment

type Fragment struct {
    number int64 // private variable - lower case
}

func (f *Fragment) GetNumber() *int64 {
    return &f.number
}
And in main.go I create a Fragment and try to change Fragment.number without a setter:
package main

import (
    "fmt"
    "myproject/fragment"
)

func main() {
    f := new(fragment.Fragment)
    fmt.Println(*f.GetNumber()) // prints 0
    // f.number = 8 // compile error - number is private
    p := f.GetNumber()
    *p = 4 // works: f.number is now 4
    fmt.Println(*f.GetNumber()) // prints 4
}
So by using the pointer, I changed the private variable from outside the fragment package. I understand that in C, for example, pointers help avoid copying large structs/arrays, and they are supposed to let you change whatever they point to. But I don't quite understand how they are supposed to interact with private variables.
So my questions are:
Shouldn't the private variables stay private, no matter how they are accessed?
How is this compared to other languages such as C++/Java? Is it the case there too, that private variables can be changed using pointers outside of the class?
My Background: I know a bit C/C++, rather fluent in Python and new to Go. I learn programming as a hobby so don't know much about technical things happening behind the scenes.
You're not bypassing any access privileges. If you acquire a *T from any imported package, then you can always mutate *T, i.e. the pointee as a whole, as in an assignment. The imported package's designer controls what you can get from the package, so the access control is not yours.
The refinement to what is said above concerns structured types (structs): the above still holds, but finer-grained access control to a particular field is governed by the case of the field's name, even when the field is reached through a pointer to the whole structure. The field name must start with an upper-case letter to be visible outside its package.
Wrt C++: I believe you can achieve the same with one of the dozens of C++ pointer types. Not sure which one, though.
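In fact, a rough C++ analogue of the Go snippet above needs nothing exotic, just a getter that hands out a raw pointer to a private member:

#include <iostream>

class Fragment {
    long number = 0;  // private member
public:
    long* GetNumber() { return &number; }
};

int main()
{
    Fragment f;
    long* p = f.GetNumber();
    *p = 4;  // mutates the private member, exactly as in Go
    std::cout << *f.GetNumber() << "\n";  // prints 4
}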
Wrt Java: no, Java has no pointers. Java references are not really comparable to pointers in Go (or C, C++, ...).

Changing a function reference in a Mach-O binary

I need to change the reference of a function in a Mach-O binary to a custom function defined in my own dylib. The process I am following now is:
Replace references to the old function with the new one, e.g. _fopen with _mopen, using sed.
Open the Mach-O binary in MachOView to find the addresses of the entities I want to change, then manually change the information in the binary using a hex editor.
Is there a way I can automate this process, i.e. write a program to read the symbols and the dynamic loading info and then change them in the executable? I was looking at the Mach-O header files at /usr/include/mach-o but am not entirely sure how to use them to get this information. Do there exist any libraries, C or Python, which help do the same?
Interesting question; I am trying to do something similar with a static lib. See if this helps.
varrunr - you can easily achieve most, if not all, of the functionality using DYLD's interposition. You create your own library and declare your interposing functions, like so:
// This is the expected interpose structure
typedef struct interpose_s {
    void *new_func;
    void *orig_func;
} interpose_t;

static const interpose_t interposing_functions[]
    __attribute__ ((section("__DATA, __interpose"))) = {
        { (void *)my_open, (void *)open }
};
.. and you just implement your own open. Inside the interposing functions, all references to the original still work, which makes this ideal for wrappers. And you can insert your dylib forcefully using DYLD_INSERT_LIBRARIES (same principle as LD_PRELOAD on Linux).
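For instance, a minimal sketch of the interposed function itself (the logging is just an example; error handling omitted):

#include <cstdarg>
#include <cstdio>
#include <fcntl.h>
#include <sys/types.h>

extern "C" int my_open(const char *path, int flags, ...)
{
    std::va_list ap;
    va_start(ap, flags);
    // the mode argument only matters when O_CREAT is set
    mode_t mode = static_cast<mode_t>(va_arg(ap, int));
    va_end(ap);

    std::fprintf(stderr, "open(%s)\n", path);
    // inside the interposing dylib, open still resolves to the original
    return open(path, flags, mode);
}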

ld linker question: the --whole-archive option

The only real use of the --whole-archive linker option that I have seen is in creating shared libraries from static ones. Recently I came across some Makefiles which always use this option when linking with in-house static libraries. This of course causes the executables to pull in unreferenced object code unnecessarily. My reaction was that this is plain wrong; am I missing something here?
The second question I have has to do with something I read regarding the whole-archive option but couldn't quite parse. Something to the effect that the --whole-archive option should be used while linking with a static library if the executable also links with a shared library which in turn contains (in part) the same object code as the static library; that is, the shared library and the static library overlap in terms of object code. Using this option would force all symbols (regardless of use) to be resolved in the executable. This is supposed to avoid object-code duplication. This is confusing: if a symbol is referred to in the program, it must be resolved uniquely at link time, so what is this business about duplication? (Forgive me if this paragraph is not quite the epitome of clarity.)
Thanks
There are legitimate uses of --whole-archive when linking executable with static libraries. One example is building C++ code, where global instances "register" themselves in their constructors (warning: untested code):
handlers.h
typedef void (*handler)(const char *data);
void register_handler(const char *protocol, handler h);
handler get_handler(const char *protocol);
handlers.cc (part of libhandlers.a)
#include <handlers.h>
#include <map>
#include <string>

// Keyed by std::string so lookup compares the characters,
// not the pointer values.
typedef std::map<std::string, handler> HandlerMap;
HandlerMap m;

void register_handler(const char *protocol, handler h) {
    m[protocol] = h;
}

handler get_handler(const char *protocol) {
    HandlerMap::iterator it = m.find(protocol);
    if (it == m.end()) return nullptr;
    return it->second;
}
http.cc (part of libhttp.a)
#include <handlers.h>
class HttpHandler {
public:
    HttpHandler() { register_handler("http", &handle_http); }

private:
    static void handle_http(const char *) { /* whatever */ }
};

HttpHandler h; // global instance: registers itself at startup!
main.cc
#include <handlers.h>
int main(int argc, char *argv[])
{
    for (int i = 1; i < argc - 1; i += 2) {
        handler h = get_handler(argv[i]);
        if (h != nullptr) h(argv[i + 1]);
    }
}
Note that there are no symbols in http.cc that main.cc needs. If you link this as
g++ main.cc -lhttp -lhandlers
you will not get an http handler linked into the main executable, and will not be able to call handle_http(). Contrast this with what happens when you link as:
g++ main.cc -Wl,--whole-archive -lhttp -Wl,--no-whole-archive -lhandlers
The same "self registration" style is also possible in plain-C, e.g. with the __attribute__((constructor)) GNU extension.
Another legitimate use for --whole-archive is for toolkit developers to distribute libraries containing multiple features in a single static library. In this case, the provider has no idea what parts of the library will be used by the consumer and therefore must include everything.
An additional good scenario in which --whole-archive is well-used is when dealing with static libraries and incremental linking.
Let us suppose that:
libA implements the a() and b() functions.
Some portion of the program has to be linked against libA only, e.g. due to some function wrapping using --wrap (a classical example is malloc)
libC implements the c() function and uses a()
the final program uses a() and c()
Incremental linking steps could be:
ld -r -o step1.o module1.o --wrap malloc --whole-archive -lA
ld -r -o step2.o step1.o module2.o --whole-archive -lC
cc step2.o module3.o -o program
Failing to use --whole-archive would leave out the function c(), which the program does use, preventing the final link from succeeding.
Of course, this is a particular corner case in which incremental linking must be done to avoid wrapping all calls to malloc in all modules, but is a case which is successfully supported by --whole-archive.
I agree that using --whole-archive to build executables is probably not what you want (due to linking in unneeded code and creating bloated software). If they had a good reason to do so, they should have documented it in the build system, as now you are left guessing.
As to the second part of your question: if an executable links both a static library and a dynamic library that contains (in part) the same object code as the static library, then --whole-archive will ensure that at link time the code from the static library is preferred. This is usually what you want when you do static linking.
An old query, but regarding your first question ("why"): I've seen --whole-archive used for in-house libraries as well, primarily to sidestep circular references between those libraries. It tends to hide the poor architecture of the libraries, so I would not recommend it. However, it is a fast way of getting a quick trial working.
For your second query: if the same symbol is present in a shared object and a static library, the linker will satisfy the reference with whichever library it meets first.
If the shared library and static library have an exact sharing of code, this may all just work. But where the shared library and the static library have different implementations of the same symbols, your program will still compile but will behave differently based on the order of libraries.
Forcing all symbols to be loaded from the static library is one way of removing confusion as to what is loaded from where. But in general this sounds like solving the wrong problem; you mostly won't want the same symbols in different libraries.
