Qt Application Perimeter

I wonder what the bounds of Qt's perimeter are. I know, for example, that it can define its own types (such as qint32 or QString), and I know it cannot get system information such as CPU usage or memory usage.
My question is about the limits of Qt.
Is it correct that Qt can only interact with what is inside the project, but not with what is outside (I mean system-related things)?

You can get information about the operating system with the QSysInfo class, if that is what you are looking for. This is one example; I am sure there are other helper classes. For information like CPU usage you should use other libraries, see here and also this question.
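For example, a minimal sketch (this assumes Qt 5.4 or newer, where these static QSysInfo getters exist):

```cpp
#include <QCoreApplication>
#include <QSysInfo>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    // Basic OS/hardware identification that Qt itself can report
    qDebug() << "OS:"           << QSysInfo::prettyProductName();
    qDebug() << "Kernel:"       << QSysInfo::kernelType() << QSysInfo::kernelVersion();
    qDebug() << "Architecture:" << QSysInfo::currentCpuArchitecture();

    return 0; // CPU or memory *usage*, however, is not covered by QSysInfo
}
```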

Qt is nothing more and nothing less than a cross-platform C++ GUI framework. It doesn't really have a perimeter; it has certain cross-platform functions implemented (widgets, frames, controls and a lot of other things). Within its own functionality it provides (as mentioned above) the QSysInfo class, but you are free to add any OS-dependent (if you target your application at a particular platform) or cross-platform solution for whatever tasks you need - hardware info, OS monitoring, etc.

Related

What can OpenGL extensions do that Qt+OpenGL can't? [closed]

Since Qt can handle OpenGL in the normal way, is cross-platform, and can handle mouse, keyboard, gamepad, etc., what are the disadvantages of using Qt with OpenGL instead of using OpenGL with extensions?
What are the disadvantages of using Qt with OpenGL instead of using OpenGL with extensions?
Your question is malformed. Nothing stops you from using Qt with OpenGL and with OpenGL extensions.
You can use Qt to manage the OpenGL window, while using direct OpenGL commands with extensions to render. You are not required to use Qt's OpenGL interface to render in an OpenGL window.
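For instance, a minimal sketch of that split (assuming Qt 5.4+, where QOpenGLWidget exists; the raw glClearColor/glClear calls stand in for whatever direct OpenGL or extension rendering you want to do):

```cpp
#include <QApplication>
#include <QOpenGLWidget>
#include <QOpenGLFunctions>

// Qt only manages the window and the GL context; painting is raw OpenGL.
class RawGLWidget : public QOpenGLWidget, protected QOpenGLFunctions
{
protected:
    void initializeGL() override
    {
        initializeOpenGLFunctions();            // resolve GL entry points
        glClearColor(0.1f, 0.2f, 0.3f, 1.0f);   // plain OpenGL call
    }
    void paintGL() override
    {
        glClear(GL_COLOR_BUFFER_BIT);
        // ...issue any OpenGL commands here, including extension functions
        // resolved via context()->getProcAddress("glSomeExtensionEXT")
    }
};

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    RawGLWidget w;
    w.show();
    return app.exec();
}
```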
Qt does not provide "additional opengl functionality." It cannot provide "additional opengl functionality." It isn't part of OpenGL, so it can't make OpenGL features magically appear.
There are no OpenGL extensions for mouse, keyboard, gamepad, or any of the other things Qt handles. Qt's windowing functionality and OpenGL extensions are two completely different things. And they are completely orthogonal; nothing stops you from using Qt+OpenGL and OpenGL extensions at the same time.
Well, unless you stop yourself. See, Qt has this OpenGL abstraction layer. This is a set of wrapper classes around OpenGL: QOpenGLShaderProgram, QOpenGLVertexArrayObject, and the like. If you use that, you don't directly make OpenGL calls; you make Qt calls that make OpenGL calls for you.
If your question is whether to use Qt+OpenGL directly vs. using Qt's OpenGL abstraction layer, that's a different matter.
The first problem is that Qt's abstraction layer is bound to OpenGL ES 2.0. While it occasionally offers features that ES 2.0 can't do, it is primarily intended as a class-ified implementation of ES 2.0. So by using ES 2.0, you're effectively giving up using lots of desktop OpenGL features.
Not "extensions"; core features.
For example, you cannot use integers for vertex attributes with Qt's abstraction. The QOpenGLShaderProgram class doesn't allow it. All of its setAttributeBuffer calls assume that you're calling glVertexAttribPointer. It has no mechanism for calling glVertexAttribIPointer. And that has been core desktop OpenGL for nearly a decade.
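To make the difference concrete, here is a sketch of the raw call in question (desktop OpenGL 3.0+, inside an active context, e.g. in paintGL(); vertexBuffer is assumed to be an already-created buffer object):

```cpp
// Integer vertex attribute: attribute 1 receives ivec4 data untouched,
// instead of being converted to float.
glBindBuffer(GL_ARRAY_BUFFER, vertexBuffer);
glVertexAttribIPointer(1,          // attribute index
                       4, GL_INT,  // four signed integers per vertex
                       0,          // tightly packed
                       nullptr);   // offset 0 into the bound buffer
glEnableVertexAttribArray(1);
// QOpenGLShaderProgram::setAttributeBuffer() can only express the
// float-converting glVertexAttribPointer form of this call.
```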
Note that this is just one feature. Other things that are part of core desktop OpenGL but that Qt has no wrapper-class support for (this is not a comprehensive list):
Separate programs
Sampler objects
Separate attribute formats
These are not bleeding-edge hardware features; most of them have been around for half a decade.
QOpenGLFunctions is similarly limited to OpenGL ES versions. That leaves plenty of non-extension desktop GL stuff on the table that cannot be used through their abstraction.
Also, because Qt's abstraction is around ES 2.0, it doesn't care about core OpenGL contexts. For example, it still has non-buffered vertex attributes (setAttributeArray). That's not legal in core OpenGL, and again hasn't been legal for nearly a decade.
So if you want to actually use core desktop OpenGL functionality, the Qt abstraction layer is out.
Then, there are places where Qt's abstraction just doesn't match how OpenGL works.
For example (and this is a personal pet peeve of mine), QOpenGLBuffer is typed. That is, the binding type is part of the object. This is not how buffer objects work!
OpenGL buffer objects aren't typed. It is perfectly legal to perform an asynchronous glReadPixels into a buffer, then bind the same buffer for use as vertex data. That's not possible with Qt's class abstraction. And it's not like this is something specific to desktop GL; OpenGL ES works the same way.
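A minimal sketch of the pattern the wrapper cannot express (raw OpenGL inside an active context; buf, width and height are assumed to exist already):

```cpp
// One buffer object, two different binding points over its lifetime.
glBindBuffer(GL_PIXEL_PACK_BUFFER, buf);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr); // async read into buf

// ...later, the very same buffer becomes a vertex-data source:
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
glBindBuffer(GL_ARRAY_BUFFER, buf);
glVertexAttribPointer(0, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, nullptr);
// A QOpenGLBuffer instance, by contrast, fixes its Type at construction.
```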
Similarly, for reasons best known to themselves, they put the vertex attribute specification functions (the equivalent of glVertexAttribPointer) in QOpenGLShaderProgram. Why are they there? While vertex attributes do have an indirect connection to a program, they're not a direct part of the conceptual program interface. OpenGL doesn't work like that.
So those are the biggest problems with Qt's abstraction layer. If you can live within those restrictions, feel free to use it. For people making desktop OpenGL applications, they may be too restrictive.
You (OP) wrote in a comment to a different answer:
Extensions provide functionality that the core of OpenGL doesn't provide, whereas Qt itself wasn't created to provide additional OpenGL functionality. They are like an addon for users.
I think you completely misunderstood what OpenGL extensions are and how they work. OpenGL extensions add new features to OpenGL (which might later be included in a core version) and/or expose vendor-specific functionality, like access to special GPU features present only in a very specific, narrow range of GPUs.
Qt, on the other hand, offers a framework for applications that deals with operating-system specifics in a portable way. Qt and OpenGL are completely orthogonal to each other, and nothing that OpenGL extensions do resembles in any way what Qt does. Qt has an OpenGL integration module that, among other things, will also load OpenGL extensions if you ask it to; but that doesn't make it a "Qt" thing.
I think you are missing the point. OpenGL (including its extensions, which provide some perks that plain OpenGL does not) is "just" a graphics library intended for 2D and 3D rendering. Qt, on the other hand, is much more. OpenGL in itself doesn't provide anything but rendering. You can't even create a window (of the kind you are used to in Windows/Linux) with it. In order to add any sort of handling of the user's input you need an extra layer, which Qt (and many other similar frameworks) provides: integration with the window manager of the OS, handling of mouse and keyboard events, etc. Qt also supports the OpenGL extensions, so you don't have to throw these away if you want to use them.
Whether you need Qt for your OpenGL tasks (with or without the extensions your system supports) is something you need to decide for yourself. Qt offers many nice features that will help make your OpenGL interaction pleasant, but it is a huge overhead, and depending on your target system you may have to use a smaller framework with a smaller memory footprint (including persistent storage for all the library files) and CPU usage. Other popular choices are GLFW, freeglut and SDL/SDL2, all of which provide at least the basics (window creation and mouse/keyboard handling) to get your application up and running.
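For comparison, a minimal GLFW 3 sketch of those basics (a toy example for illustration, not taken from any of the answers):

```cpp
#include <GLFW/glfw3.h>

int main()
{
    if (!glfwInit())
        return 1;

    // One call creates the window, the OpenGL context and the input plumbing.
    GLFWwindow *window = glfwCreateWindow(640, 480, "GLFW example", nullptr, nullptr);
    if (!window) {
        glfwTerminate();
        return 1;
    }
    glfwMakeContextCurrent(window);

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);   // raw OpenGL; load extensions however you like
        glfwSwapBuffers(window);
        glfwPollEvents();               // keyboard/mouse events
    }

    glfwTerminate();
    return 0;
}
```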

Controlling a Specific Application's Volume Level

Can NAudio be used for setting the volume level of a specific application?
(This is on Windows 7.)
I've found this thread, referring to the issue, which suggests implementing the required solution yourself on top of WASAPI, but I'd prefer a simpler solution, ideally using NAudio wrappers for this, if such wrappers exist.
I have also found this WASAPI-based solution, which (for me, on 32-bit Windows 7 Professional) does not enumerate all audio-playing applications, and is hence not applicable.
What I'm actually trying to accomplish: I'm using a commercial application playing a long sequence of audio files, of various qualities and audio-levels. I'd like to apply AGC (Automatic Gain Control, i.e. volume-level normalization) to that application, to at least "blur" (if not eliminate altogether) the difference in volume-level between played tracks.
As a first phase, I could assume that this application is the only one producing audio on the system, and handle only Windows' main audio-path samples, but I do not know how to accomplish that either.
Can NAudio interfere with the audio path, modifying audio samples (i.e. amplifying them) before they reach the speaker jack?
Please note that simply changing Windows main volume gauge won't do the trick, as it won't be reflected in the amplitude of the samples captured by NAudio/WASAPI Loopback.
NAudio would be the preferred approach, but is NOT a must.
NAudio does have wrappers for many parts of the Windows Core Audio API, but does not include the IAudioSessionEnumerator that Roman mentions in the answer you linked to. It seems this part of the API was introduced with Windows 7.
So I'm afraid NAudio can't help you here, and you'd need to port Roman's code to C#, which would require you to create interop wrappers for IAudioSessionEnumerator and related interfaces such as IAudioSessionManager and IAudioSessionControl.
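For orientation, here is a rough C++ sketch of the underlying Core Audio (Windows 7) calls those interop wrappers would have to cover; it is an illustration of the session API rather than working NAudio code, and error handling and cleanup are omitted:

```cpp
#include <windows.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#include <audiopolicy.h>

// Walk the audio sessions on the default render device and set each
// session's (i.e. each application's) volume level.
void SetSessionVolumes(float level)
{
    CoInitialize(nullptr);

    IMMDeviceEnumerator *devEnum = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void **)&devEnum);

    IMMDevice *device = nullptr;
    devEnum->GetDefaultAudioEndpoint(eRender, eConsole, &device);

    IAudioSessionManager2 *mgr = nullptr;
    device->Activate(__uuidof(IAudioSessionManager2), CLSCTX_ALL, nullptr, (void **)&mgr);

    IAudioSessionEnumerator *sessions = nullptr;
    mgr->GetSessionEnumerator(&sessions);

    int count = 0;
    sessions->GetCount(&count);
    for (int i = 0; i < count; ++i) {
        IAudioSessionControl *ctrl = nullptr;
        sessions->GetSession(i, &ctrl);

        ISimpleAudioVolume *volume = nullptr;
        ctrl->QueryInterface(__uuidof(ISimpleAudioVolume), (void **)&volume);
        volume->SetMasterVolume(level, nullptr);   // per-session volume

        volume->Release();
        ctrl->Release();
    }
    // ...Release() sessions, mgr, device, devEnum, then CoUninitialize()
}
```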

ANSI C compliant or C++ cross-platform GUI library?

I'm searching for a simple library for creating GUIs that:
has a portable codebase across different compilers and OSes
can be easily extended to a new platform if that platform is not natively supported
is a real library and not just a collection of #defines, tools and other unportable, non-standard things.
So far the "best" match is QT that is just the opposite of each one of this 3 points, especially the 3rd one (moc compiler and #defines ... ).
I also do not need data structures and 10000 extra functions, i just need to code a portable GUI, hipotetically i don't even need a signal slot library included because I can handle signals with third part libraries.
If there is no such lib available, can you point me to a resource where I can learn about the OS-specific basics of widgets and windows?
I never used it, but I would suggest looking into IUP
From what I read this would fit the bill quite well. The project is also quite active. Though it is probably not too pretty.
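For a first impression, a minimal "hello world" following IUP's documented IupOpen/IupDialog/IupMainLoop pattern (a sketch, untested):

```cpp
#include <iup.h>

int main(int argc, char **argv)
{
    IupOpen(&argc, &argv);                       // initialize the toolkit

    Ihandle *label  = IupLabel("Hello from IUP");
    Ihandle *dialog = IupDialog(label);          // a native dialog wrapping the label
    IupSetAttribute(dialog, "TITLE", "IUP example");

    IupShow(dialog);                             // map the native window
    IupMainLoop();                               // run the event loop

    IupClose();
    return 0;
}
```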

Porting Borland C++ Builder to Qt

I have to port a project from Borland C++ Builder 5.0 under Windows XP to Qt 4.7.1 using g++ under Windows 7/mingw. The libraries and command-line utilities are done, and now I have to tackle the GUI applications, which use Borland VCL.
Can anybody recommend any tools or libraries to make this task easier?
Does anybody have any experience of this?
Edited to add: Well, I took the bull by the horns and implemented the GUI from scratch. And I have to say, the commenters were right: I can't see any way of using the existing Borland GUI to ease the process.
There are several big differences between VCL and Qt that will make an automatic conversion process quite difficult.
Qt uses signals, slots and inheritance where VCL uses events (see the sketch after this list).
VCL components use absolute coordinates and Qt uses layouts. Of course, you could use absolute coordinates also with Qt, but the GUIs would be quite awful then.
VCL's TListBox and TTreeView classes are quite different from Qt's View and Model classes (although you could use QListWidget and QTreeWidget instead).
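As a rough illustration of the first point, a hypothetical one-button form in Qt 4 style (this assumes the usual qmake/moc setup; the VCL side appears only in the comment):

```cpp
#include <QWidget>
#include <QPushButton>

// VCL: the form designer wires Button1->OnClick to a __fastcall
// Button1Click(TObject *Sender) method on the form. In Qt the equivalent
// is an explicit signal/slot connection made in code:
class MyForm : public QWidget
{
    Q_OBJECT   // processed by moc, which generates the signal/slot glue
public:
    MyForm()
    {
        QPushButton *button = new QPushButton("Convert", this);
        connect(button, SIGNAL(clicked()), this, SLOT(onConvertClicked()));
    }

private slots:
    void onConvertClicked() { /* ...whatever Button1Click used to do... */ }
};
```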
I guess it is much faster to design totally new GUIs with Qt than to create even a mediocre VCL-to-Qt converter. And the code will be much easier to maintain. I suggest that you take one VCL form of medium complexity and recreate it with Qt. After that you can estimate the total recreation work. You will also have a better understanding of the feasibility of a conversion tool, which you would most probably need to write yourself.
Someone has written a tool to convert .dfm files to Qt .ui files:
http://sourceforge.net/projects/dfm2qt4ui/
It has a few small bugs, but it can save several hours of time porting form designs. In some cases redesigning specific forms is preferable, but in many cases having labels and roughly equivalent controls positioned for you saves a lot of point-and-click work.
I agree with the current consensus that automatic conversion from VCL to Qt is not a good idea, because the concepts behind the two are very different, and you are much better off learning "the Qt way" and using that from the start.
However there is one major step that nobody has yet mentioned: refactoring! Before starting, make sure you refactor the original forms to remove as much business logic as possible and leave only what is really GUI code. It depends on how good your architecture already is of course, but the VCL designer tends to encourage putting as much as possible in forms (even going as far as having invisible "data forms" with non-visual components!), so you often find a lot of stuff in the form that shouldn't be there.

MPI under the hood

I need to deliver a presentation on programming in MPI, and I need to add a segment on how MPI works under the hood. For example: what happens when I call MPI_Init?
Do you know of any good source from where I can learn these details?
The MPI Spec contains the description of the knobs, sliders, and displays that are on the outside of the "black box" of each API.
The interior details of the black boxes will be implementation dependent... and will also depend on the interconnect (e.g. TCP, IBV, DAPL, etc.), the OS (e.g. whether the implementation uses LSB or native libraries, etc.), and on many other factors to a lesser degree (e.g. message-size thresholds will trigger different code paths, and so on). Using "strace" and "ltrace" on the a.out may provide some insight into the actual goings-on inside the black box.
The best recommendation is to pick an open source implementation and examine the code to determine the internal details.
MPI is a specification, not a particular implementation. The observable behavior is given in the MPI spec. How it works under the hood depends on the particular implementation. If you'd like to take a look at an example implementation, you might be interested in looking at MPICH2 and browsing their source code.
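For reference, these are the calls whose internals you would be tracing through such a source tree; a minimal MPI program looks like this (standard MPI C API):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char *argv[])
{
    // MPI_Init is where the implementation does its setup work: process
    // startup, rank assignment, and wiring up the interconnect.
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    std::printf("rank %d of %d\n", rank, size);

    MPI_Finalize();   // tears down whatever MPI_Init set up
    return 0;
}
```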
Complement your study of the source code of an implementation of MPI with consideration of how you would implement MPI_Init on your platform of choice. MPI sits on top of already available O/S functionality. I don't mean to suggest that you can figure out how a particular version of MPI is implemented by this approach, but to suggest that you can learn better what is going on under the hood by tackling the problem from another angle.
MPI is only a spec. The MPI spec is implemented by various groups and organizations. You will want to pick one implementation, say MPICH, and find their design documentation. That will tell you how the MPI spec is implemented by that group.
If you just want to describe what happens when an application written in MPI is started, you can read about MPI and MPI programming. I highly recommend http://www.citutor.org
