Calling a DBus service from a pure QML plasmoid - qt

Could anybody suggest a tutorial or an example of how to call a custom D-Bus service (on the system bus) from a pure QML KDE plasmoid, without an intermediate layer? Is it possible at all?
The standard plasmoids like org.kde.plasma.battery or org.kde.kscreen use the DataSource approach, which likely requires a shared library or other C++ code. Other plasmoids invoke wrappers (e.g. in Python) to talk to a D-Bus service.
So the question: is it possible to call a given D-Bus service directly from QML/JS without an intermediate layer, at most by importing some (standard) KDE/Qt/Plasma library?

Related

Call remote test library constructor as keyword from Robot Framework

Is it possible to call a Test Library constructor from Robot Framework?
We are using the Remote library interface (NRobot.Server) to connect from RF to a Test Library implemented in C#.
Currently it exposes all public methods implemented in the Test Library, except constructors.
There are multiple Test Libraries in our project where some functionality is implemented as part of the constructors.
Hence we need a way to call a constructor as a test step, to execute certain functionality whenever required.
If that is not possible, we may need to move the functionality from the constructors into new public methods. But we want to avoid that if possible.
Thanks in advance...
In short - no.
When calling a remote library, you're actually just the client in an XML-RPC comm protocol; it is the server's responsibility to have the library instantiated, so it (the very same library) can process your instructions and act as needed. Thus normally the library is already instantiated when you call it from your RF code - too late to invoke its constructor.
Naturally, this can be implemented differently - the remote library server could instantiate the target library on a (special) call, and thus you would be able to provide constructor arguments - but that requires a design/code change in the library itself.
This is in contrast to local libraries, which are instantiated in your local interpreter when they are imported.

How can I call C++ methods from JS in a sync manner (QtWebEngine)

Is there a way I can call my C++ methods from JS in a sync (instead of async) manner? (I'm using QtWebChannel).
If there is not, can I at least use async/await without transpiling?
I don't think it is possible to use synchronous calls. The documentation says: "Note that all communication between the HTML client and the QML/C++ server is asynchronous".
-> https://doc.qt.io/qt-5.9/qtwebchannel-javascript.html

Is Retrofit of any use if I am already using OkHttp3, Moshi and RxJava in my project?

I have done some R&D on the above libraries and have used some of them in my project. I am using Moshi for JSON parsing, the OkHttp3 library for HTTP connections and RxJava for asynchronous and event-based programming. Now when I looked at Retrofit, I felt it's of no use to me, as I have already used the main components of Retrofit myself.
I just want to know whether other people think I am looking at this the right way or not.
Edit: From my point of view, Retrofit only provides a clean interface over an HTTP client, where one can customize requests, headers, etc. with annotations.
This is a good choice of libraries from my point of view. The first three are developed by Square and they work very well together. However, the main difference is that each library works on a different layer.
OkHttp: transport layer. Deals with the HTTP protocol. Performs the networking.
Moshi: JSON parser. Transforms the bytes from OkHttp into Java objects.
Retrofit: REST layer. Transforms HTTP logic (status codes) into REST logic.
RxJava: provides tools to write reactive code instead of imperative code.

Some queries on QSettings, qmlRegisterType() and setContextProperty

I will try explaining my confusion through the application I am currently developing.
My application (based on Qt 5.1 + Qt Quick Controls) interacts with the Facebook API to manage a Facebook Page. I am trying to keep the QML code (for the UI) as separate as possible from the C++ core.
Now, an OAuth2 implementation is required to be able to interact with the Facebook API. For that, I have a C++ OAuth2 class, the constructor of which has the following signature:
OAuth2::OAuth2(QString appId, QString redirectUrl, QStringList permissions);
Now, as the OAuth process requires a browser, I have also implemented an OAuth2Browser.qml, which uses OAuth2 to complete an authorization.
I have the following options to expose the OAuth2 class to OAuth2Browser:
1. Instantiate OAuth2 in C++ and use setContextProperty() to expose the instance to OAuth2Browser. However, this means my C++ code has to deal with the UI code. The more baffling issue is that OAuth2Browser is a secondary window: when a user clicks "Authorize" on the MainWindow, an AppController C++ object (connected to MainWindow) launches the OAuth2Browser window. Thus, the instantiation code for OAuth2Browser would go deep down inside an AppController method. It would have been good if only main.cpp had to deal with window creation.
2. Use qmlRegisterType(). In this case, I can't pass parameters to the constructor, so I would have to implement an init() method that initializes the OAuth2 object and call it from OAuth2Browser's Component.onCompleted handler. However, with this approach I would have to expose QSettings to the UI code (the QML window) so that the parameters for init() can be retrieved. I am very skeptical about whether directly exposing application settings to the QML UI is a good idea.
3. Implicitly use QSettings inside the OAuth2 constructor. This way, I don't have to pass any parameters and I can still use qmlRegisterType(). However, this means I am doing some magic "behind the curtains": instead of explicitly passing a QSettings instance, I use it wherever I want, thus hiding the initialization details from the public API.
An alternative based on the 3rd option was suggested on IRC: use an initFromSettings() type of method to initialize the instance if no parameters are passed to the constructor. That way, the initialization is not hidden, and initFromSettings() can confidently use QSettings internally. I can then happily use qmlRegisterType() to instantiate OAuth2 in QML.
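Roughly, I imagine the C++ side of that alternative would look something like the following sketch (the settings keys and the import URI are just placeholders I made up; only OAuth2, init() and initFromSettings() come from the discussion above):

// oauth2.h
#include <QObject>
#include <QSettings>
#include <QString>
#include <QStringList>

class OAuth2 : public QObject
{
    Q_OBJECT
public:
    // Default constructor, so qmlRegisterType() can instantiate it from QML.
    explicit OAuth2(QObject *parent = nullptr) : QObject(parent) {}

    // Explicit initialization for callers that already have the values.
    Q_INVOKABLE void init(const QString &appId, const QString &redirectUrl,
                          const QStringList &permissions)
    {
        m_appId = appId;
        m_redirectUrl = redirectUrl;
        m_permissions = permissions;
    }

    // Reads the same values from QSettings (assumes the organization/application
    // names were set in main()), so QML needs no constructor arguments.
    Q_INVOKABLE void initFromSettings()
    {
        QSettings settings;
        init(settings.value("facebook/appId").toString(),
             settings.value("facebook/redirectUrl").toString(),
             settings.value("facebook/permissions").toStringList());
    }

private:
    QString m_appId;
    QString m_redirectUrl;
    QStringList m_permissions;
};

// main.cpp:
//   qmlRegisterType<OAuth2>("com.example.auth", 1, 0, "OAuth2");
// OAuth2Browser.qml:
//   OAuth2 { id: oauth; Component.onCompleted: initFromSettings() }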
So, what is the better approach?
Also:
Is exposing QSettings directly to the QML UI a good idea?
I personally prefer qmlRegisterType() to setContextProperty() - that way, the lifetime of a registered class's instance is managed solely by QML. However, the former is harder to use due to the lack of support for parameterized constructors, unless some form of init() is used explicitly for initialization. Is that a good design?
I apologise in advance for an excruciatingly long post. But I thought it best to ask here.
It's difficult to fully follow your post since it's so long and information-dense. Here are my suggestions, for what they might be worth.
You want to know what is a good design but you don't specify your goals. You can't really rate something for how well it achieves goals unless you can enumerate the goals.
You're dealing with Facebook's API. My crystal ball says change is something you will need to deal with. Therefore, putting all the tools into QML may make you better able to respond to change: you can respond by rewriting JavaScript in a QML file instead of recompiling (hopefully). Use properties and the signal/slot design and it should be flexible enough to get the job done. Performance doesn't seem to be an issue.
I would create a settings object that exposes the stuff you want to store, perhaps using the model/view architecture Qt already provides. The underlying storage - XML file, database, QSettings/registry - isn't important. You can offer a grid/list to allow users to update their settings if necessary.
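For example, a thin QObject wrapper around the storage, exposed to QML as properties - just a sketch, where the class name, property and settings key are invented:

#include <QObject>
#include <QSettings>
#include <QString>

class AppSettings : public QObject
{
    Q_OBJECT
    Q_PROPERTY(QString appId READ appId WRITE setAppId NOTIFY appIdChanged)
public:
    explicit AppSettings(QObject *parent = nullptr) : QObject(parent) {}

    QString appId() const { return m_settings.value("facebook/appId").toString(); }
    void setAppId(const QString &id)
    {
        if (id == appId())
            return;
        m_settings.setValue("facebook/appId", id);
        emit appIdChanged();
    }

signals:
    void appIdChanged();

private:
    QSettings m_settings;   // the backend (ini file, registry, ...) stays hidden
};

// Expose it either way:
//   qmlRegisterType<AppSettings>("com.example.settings", 1, 0, "AppSettings");
//   // or
//   engine.rootContext()->setContextProperty("appSettings", new AppSettings(&engine));

That keeps QSettings itself out of the QML code; the UI only sees properties it can bind to.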
Put together the OAuth and browser tools as objects that let you script the behavior of the app in QML.
These tools for exposing C++ objects might also be something excellent to share with the community.
Good luck!

What is IDL?

What is meant by IDL? I have googled it, and found out it stands for Interface Definition Language, which is used for interface definition for components. But, in practice, what is the purpose of IDL? Does Microsoft use it?
An interface definition language (IDL) is used to set up communications between clients and servers in remote procedure calls (RPC). There have been many variations of this such as Sun RPC, ONC RPC, DCE RPC and so on.
Basically, you use an IDL to specify the interface between client and server so that the RPC mechanism can create the code stubs required to call functions across the network.
RPC needs to create stub functions for the client and a server, using the IDL information. It's very similar to a function prototype in C but the end result is slightly different, such as in the following graphic:
+----------------+
|     Client     |
|  +----------+  |                 +---------------+
|  |   main   |  |                 |    Server     |
|  |----------|  |                 | +----------+  |
|  | stub_cli |------(comms)------>| | stub_svr |  |
|  +----------+  |                 | |----------|  |
+----------------+                 | | function |  |
                                   | +----------+  |
                                   +---------------+
In this example, instead of calling function in the same program, main calls a client stub function (with the same prototype as function) which is responsible for packaging up the information and getting it across the wire to another process, via the comms channel.
This can be the same machine or a different machine; it doesn't really matter - one of the advantages of RPC is being able to move servers around at will.
In the server, there's a 'listener' process that will receive that information and pass it to the server. The server's stub receives the information, unpacks it and passes it to the real function.
The real function then does what it needs to and returns to the server stub which can package up the return information (both return code and any [out] or [in,out] variables) and pass it back to the client stub.
The client stub then unpacks that and passes it back to main.
The actual details may differ a little but that explanation should be good enough for a conceptual overview.
The actual IDL may look like:
[ uuid(f9f6be21-fd32-5577-8f2d-0800132bd567),
  version(0),
  endpoint("ncadg_ip_udp:[1234]", "dds:[19]")
] interface function_iface {
    [idempotent] void function(
        [in] int handle,
        [out] int *status
    );
}
All that information at the top (for example, uuid or endpoint) is basically networking information used for connecting the client and server. The "meat" of it is inside the interface section, where the prototype is shown. This allows the IDL compiler to build the client and server stub functions for function, which are compiled and linked with your client and server code to get RPC working.
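From the client code's point of view, the generated stub then makes the remote call look like an ordinary local call. A conceptual fragment only (it won't build on its own, since function here is the stub produced by the IDL compiler):

// Conceptual client fragment:
int handle = 42;             // whatever the [in] parameter means to the service
int status = 0;
function(handle, &status);   // stub_cli marshals the arguments, ships them to
                             // stub_svr, and unpacks the [out] status on return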
Microsoft does use IDL (I think they have a MIDL compiler) for COM stuff. I've also used third party products with MS operating systems, both DCE and ONC RPC.
There is also Interactive Data Language which I had a job using for scientific data analysis, but perhaps from the context it's clear to you that's not what this IDL stands for.
IDL is an acronym for Interface Definition Language of which there are several variations depending on the vendor or standard group that defined the language. The goal of an IDL is to describe the interface for some service so that clients wanting to use the service will know what methods and properties, the interface, the service provides. IDL is normally used with binary interfaces and the IDL language file describes the data types used in the binary interface.
There are several different standards for binary components, typically COTS or Commercial Off The Shelf, and how a client communicates with the binary component can vary, though traditionally some version of Remote Procedure Call or RPC is used. Two such standards are the Microsoft Component Object Model or COM standard and the Common Object Request Broker Architecture or CORBA standard. There are other standards for components, such as Firefox plugins or plugins for other applications such as Visual Studio itself; however, these do not necessarily use some form of Interface Description Language, relying instead on some kind of Software Development Kit or SDK with standardized and well-known interfaces to an API.
What an IDL allows is a greater degree of flexibility in being able to create components offering services of various kinds which, due to their binary nature, may be used with a variety of different programming languages and a variety of different environments.
Microsoft uses a dialect of IDL with COM objects, and Microsoft IDL is not the same as CORBA IDL, though there are similarities since they share common language roots. The IDL file contains the description of the interfaces supported by a COM object. COM allows for the creation of in-process services (which may use RPC or direct DLL calls) or out-of-process services (which use RPC). The idea behind COM is that the client only needs to know the identifier for the component along with the interface to be able to use it. The client requests the COM object, then requests from the COM object's factory a class object supporting the interface the client wants to use, and then uses the COM object through that interface.
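In C++, that "identifier plus interface" idea looks roughly like this - a sketch only, where CLSID_SomeComponent, IID_ISomeInterface and ISomeInterface stand in for whatever the component's IDL/type library actually declares, and error handling is trimmed:

#include <windows.h>
#include <objbase.h>

int main()
{
    CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);

    ISomeInterface *itf = nullptr;
    HRESULT hr = CoCreateInstance(CLSID_SomeComponent,    // "which component"
                                  nullptr,
                                  CLSCTX_INPROC_SERVER,   // in-process DLL server
                                  IID_ISomeInterface,     // "which interface"
                                  reinterpret_cast<void **>(&itf));
    if (SUCCEEDED(hr)) {
        itf->DoSomething();   // call through the interface; any marshaling is
                              // done by the generated proxy/stub code
        itf->Release();
    }

    CoUninitialize();
    return 0;
}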
Microsoft provides the MIDL compiler which processes an IDL file to generate the type library, providing information to users of the COM object about the interface, and the necessary stubs for marshaling data across the interface between client and service.
Marshaling of data basically means the stub takes the data provided by the client, packages it up and sends it to the service which performs some action and sends data back. This sending and receiving of data may be through some RPC service or through direct DLL function calls. The response from the service is translated into a form suitable for the client and then provided to the client. So basically the marshaling functionality is an adapter (see the adapter design pattern) or bridge (see the bridge design pattern) between the client and the service.
Visual Studio (my experience is with C++) contains a number of wizards that can be used to generate an example so that you can play with this. If you are interested, you can create a workspace, then create an ATL project in it to generate a control, and then a simple MFC dialog project to test it out. Using ATL for your COM control hides quite a few details that you can investigate later, and the simple MFC dialog project provides an easy way to create a container. You can also use the ActiveX Control Test Container tool, available in Visual Studio, to do preliminary testing and to see how methods and properties work.
There are also a number of example projects on web sites such as codeproject.com. For instance here is one using C to expose all the ugly plumbing behind COM and here is one using C++ without ATL.
It's a language that has been used in the COM era to define interfaces in a (supposedly) language-neutral fashion.
It defines the interface to be used for communication with an exposed service in another application.
If you use SOAP you'll know about WSDL, which is another form of IDL. IDL usually refers to Microsoft COM or CORBA IDL.
IDL is vital in two cases:
1. To create proxy/stub DLLs for EXE servers.
2. To create type libraries for automation servers.
There is a very good article on the basics of IDL at the link.
To study IDL, it is best to read the compiler's own IDL header files, which can be found in the include subdirectory of the VC++ package.
