The problem is essentially stated in the title. I tried out Qt's 2dpainting example and noticed that the same code consumes more CPU when drawing on a QGLWidget and less when drawing on a plain QWidget. I thought the QGLWidget was supposed to be faster. One more interesting phenomenon: on the QGLWidget the antialiasing hint seems to be ignored.
OpenGL version: 3.3.0
So why is that?
Firstly, note this text at the bottom of the documentation that you link to:
The example shows the same painting operations performed at the same
time in a Widget and a GLWidget. The quality and speed of rendering in
the GLWidget depends on the level of support for multisampling and
hardware acceleration that your system's OpenGL driver provides. If
support for either of these is lacking, the driver may fall back on a
software renderer that may trade quality for speed.
Putting that aside, hardware rendering is not always guaranteed to be faster than software rendering; it all depends upon what the renderer is being asked to do.
An example of where software can exceed hardware is when the thing being rendered is constantly changing. So, if you have a drawing program that draws a line following the mouse as it moves, implemented by adding points to a painter path that is redrawn every frame, a hardware renderer will be subject to constant pipeline stalls as new points are added to the painter path. Setting up the graphics pipeline after a stall takes time, which is something a software renderer never has to deal with.
In the 2dpainting example you ask about, the helper class, which performs the paint calls, is doing a lot of unnecessary work: saving the painter state, setting the pen/brush, rotating the painter, restoring the painter state. All of this is a bigger overhead in hardware than in software. To really see hardware rendering outperform software, pre-calculate the objects' positions outside of the render loop (the paint function) and do nothing but the actual rendering in the paint function; that is likely to show a noticeable difference here.
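As a rough illustration of that last point, here is a minimal sketch (class and member names are made up for the example): the per-frame math happens in a timer callback, and the paint handler only draws pre-computed points.
#include <QGLWidget>
#include <QPainter>
#include <QTimer>
#include <QVector>
#include <QPointF>
#include <cmath>
// Hypothetical widget: animation state is advanced on a timer,
// so paintEvent() does nothing but render ready-made data.
class PrecalcWidget : public QGLWidget {
public:
    explicit PrecalcWidget(QWidget *parent = 0) : QGLWidget(parent), m_angle(0.0) {
        QTimer *timer = new QTimer(this);
        connect(timer, &QTimer::timeout, this, [this] {
            m_angle += 1.0;
            m_points.clear();
            for (int i = 0; i < 100; ++i) {
                double a = (m_angle + i * 3.6) * 3.14159265 / 180.0;
                m_points.append(QPointF(150 + 100 * std::cos(a),
                                        150 + 100 * std::sin(a)));
            }
            update(); // schedule a repaint; all positions are already computed
        });
        timer->start(16); // ~60 updates per second
    }
protected:
    void paintEvent(QPaintEvent *) override {
        // The paint function does nothing but render pre-computed data.
        QPainter painter(this);
        painter.setRenderHint(QPainter::Antialiasing);
        painter.drawPolyline(m_points.constData(), m_points.size());
    }
private:
    QVector<QPointF> m_points;
    double m_angle;
};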
Finally, regarding anti-aliasing, the documentation that you linked to states: "the QGLWidget will also use anti-aliasing if the required extensions are supported by your system's OpenGL driver"
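If multisampling support is what is missing on your system, you can at least request it explicitly when constructing the widget; whether it is honoured still depends on the driver. A small sketch:
#include <QGLWidget>
#include <QGLFormat>
// Hypothetical helper: request multisampling up front when creating the widget.
QGLWidget *createMsaaWidget(QWidget *parent) {
    QGLFormat fmt(QGL::SampleBuffers); // ask for a multisampled framebuffer
    fmt.setSamples(4);                 // hint: 4x MSAA
    QGLWidget *w = new QGLWidget(fmt, parent);
    // If this comes back false, the driver refused multisampling and the
    // QPainter::Antialiasing hint will effectively be ignored.
    if (!w->format().sampleBuffers())
        qWarning("No sample buffers available - expect aliased output");
    return w;
}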
I am trying to use an OpenGL context that is external to Qt, via a window handle that comes from Qt. The setup is:
A QMainWindow - contains various widgets, including a QWebEngineView to make some web content available (in my case it's Leaflet, for rendering and interacting with OpenStreetMap tiles)
Panda3D engine - rendered on top of my Qt application using a window handle of the central QWidget.
The setup works... when Panda3D is set to DirectX9 (aka pandadx9). When I switch the pipeline to OpenGL (aka pandagl) I get a black screen (Panda3D's window) and very glitchy OpenGL content (Qt). The reason is simple yet beyond my ability to fix: QWebEngineView uses OpenGL. Somehow there is a conflict at the OpenGL context level between the engine and Qt. I am looking for a way to resolve this without giving up the direct interaction with Panda3D's window (in my case using ShowBase), since the engine already offers a lot in terms of features for handling mouse events that I would otherwise be forced to reimplement in Qt and pass down to the engine. In addition, I am not sure whether I can make Panda3D render its scene into an FBO, or how I would load that into - let's say - a QOpenGLWidget. Activating a shared OpenGL context before initializing QApplication allows multiple OpenGL widgets to render OpenGL content.
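For reference, the shared-context activation mentioned above is a one-liner that has to run before the QApplication object is created; a minimal sketch:
#include <QApplication>
int main(int argc, char *argv[]) {
    // Must be set BEFORE the QApplication object is constructed, so that all
    // Qt-internal OpenGL contexts (including the one QWebEngineView's
    // compositor uses) end up in one sharing group.
    QCoreApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
    QApplication app(argc, argv);
    // ... create the QMainWindow, QWebEngineView, engine host widget, etc. ...
    return app.exec();
}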
So far I have experimented with integrating Panda3D and Qt in two ways:
run two event loops in parallel - start the Panda3D engine in a child process and communicate with it through a pipe
run a single event loop - use the event loop of the Panda3D engine to also drive Qt's main event loop, by adding a task to the engine's task manager that runs QApplication::processEvents() on every cycle (see the sketch under CASE 2 below)
In both cases I am handing over a window Id (QWidget::winId()) as the parent window of Panda3D.
CASE 1 - parallel processes, separate event loops
This solution comes with a lot of overhead. All communication between the Qt content (running in the parent process) and the engine (running in the child process) has to be sent through a pipe, hence involving IPC. This adds a lot of code complexity, and in the case of my logging (using Python's logging module with a custom logging handler that writes records into an SQLite3 database) introduces a whole lot of issues; concurrent write access to a file between processes is tricky in general, and I'm definitely not an expert. In this case, however, the OpenGL issue described above is not present!
CASE 2 - single process, single event loop
In my opinion this is the more elegant approach and what I would like to go with (if possible). An example can be found here. I use the engine's main loop to process Qt's main loop, because a 3D game engine usually has to deal with far more events in a shorter period of time (rendering, audio, video, filesystem access, physics and so on) than a standard Qt GUI. This is also the recommended way as described in the official documentation of the engine. The other way around (Qt's main loop driving Panda3D's) is also possible. Imho neither has anything to do with my issue: the moment I add anything Qt-ish that uses OpenGL, the problem described above occurs. On Windows this is not a huge deal breaker, since I can use DirectX for the engine while Qt does its OpenGL thing. On Linux this is not possible (without something like Wine). In addition, I want to use OpenGL exclusively, including GLSL.
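For illustration, the single-event-loop pattern boils down to something like the following. engine_step() is a hypothetical stand-in for one iteration of Panda3D's frame; in reality the call is registered as a task in the engine's task manager rather than written as an explicit loop:
#include <QApplication>
// Hypothetical stand-in: renders one engine frame, steps audio/physics, ...
void engine_step() { /* ... */ }
int main(int argc, char *argv[]) {
    QApplication app(argc, argv);
    // ... create widgets, hand QWidget::winId() over to the engine ...
    for (;;) {
        engine_step();                  // the engine sets the cadence
        QApplication::processEvents();  // pump Qt's event queue once per cycle
    }
}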
Here is a visual representation of case 1 and case 2 (but with mixed DX9 and OpenGL):
And below there are two visual representations of case 2 with only OpenGL:
While Panda3D offers CPU rendering (aka p3tinydisplay), QWebEngineView does not. Falling back to CPU rendering on the engine side is not an option, considering the huge number of polygons I have to render, not to mention that I can do something more useful with the CPU (e.g. processing the physics).
Last but not least, I have seen a third integration attempt, which I quickly discarded - rendering the scene as an image to RAM, reading it in Qt, generating a QPixmap from it and painting it on top of a QLabel. Needless to say, this is a no-go for my scenario, due to the heavy performance hit among other things.
Any ideas how to tackle this?
I don't think event loops have anything to do with this; the problem is that by default child windows get the same device context (DC) as the parent. In your case that's a problem, because two different components (the Qt framework and the Panda3D engine) try to ChoosePixelFormat and initialize an OpenGL context twice on the same DC, which is not supported.
The proper solution is to create the QWidget that hosts the Panda3D engine with the Qt::MSWindowsOwnDC window flag, which corresponds to the CS_OWNDC window-class style. Normally a QWidget doesn't create any native window at all -- it is implemented entirely within the Qt framework by drawing itself onto the parent window.
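A minimal sketch of that idea (the helper name is made up; the flag only has an effect on Windows):
#include <QWidget>
// Hypothetical helper: returns a native handle with its own DC that can be
// handed to Panda3D as its parent window.
WId makeEngineHost(QWidget *parentWidget) {
    QWidget *host = new QWidget(parentWidget, Qt::MSWindowsOwnDC);
    host->setAttribute(Qt::WA_NativeWindow);  // force a real native window
    host->setAttribute(Qt::WA_PaintOnScreen); // keep Qt's painter off of it
    host->show();
    return host->winId(); // Panda3D runs ChoosePixelFormat on this window's DC
}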
If this was a texture that I created, I'd simply make its internalFormat be GL_SRGB. But I'm passing a Qt Quick Item foo into my custom QQuickFramebufferObject GL code, where I take foo->textureProvider()->texture() and use that texture to render.
So can I make the filtering of the texture (when bilinearly sampling it) be gamma-correct?
Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
I've searched Google for hooks Qt may provide to configure this, but found nothing except QQuickTextureFactory, which does not solve my problem, at least AFAICS.
I need to support OpenGL ES 2.0.
Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Well, from the filtered result color, there is simply no way to get back the original colors used as input, even if you know the interpolation factors.
Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
A more efficient variation of this strategy would be to create a second view onto the texture data with an sRGB format (see the GL_ARB_texture_view extension, core since GL 4.3), which completely avoids the copy and the additional RAM usage.
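A minimal sketch of such a view, assuming the source texture has an immutable format (texture views require glTexStorage-allocated storage) in a view class compatible with sRGB, e.g. GL_RGBA8; note this needs desktop GL 4.3 or ARB_texture_view, so it is no help on plain OpenGL ES 2.0:
// GL function loader (glad / GLEW / ...) assumed to be initialized already.
GLuint makeSrgbView(GLuint srcTex) {
    GLuint view;
    glGenTextures(1, &view);
    // Same storage, reinterpreted as sRGB: sampling through 'view' decodes
    // sRGB to linear BEFORE the bilinear filter is applied.
    glTextureView(view, GL_TEXTURE_2D, srcTex, GL_SRGB8_ALPHA8,
                  0, 1,  // mip levels: base, count
                  0, 1); // array layers: base, count
    return view;
}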
I want to use native OpenGL in the paint function of my widgets (QPainter) to improve performance.
I saw that there are the functions QPainter::beginNativePainting() / endNativePainting(), which could help me.
But I can't find any examples for that...
1. I wanted to know whether those functions are low-cost, or whether every use of them reduces performance?
2. Can I define beginNativePainting() and endNativePainting() in general for all the widgets I use, instead of using them in every paint function I have?
Thanks for any help.
There is some basic example code right in the documentation: http://doc.qt.io/qt-4.8/qpainter.html#beginNativePainting
The functions themselves should be fairly low-cost, but calling them might still cause noticeable overhead, because Qt has to flush its internal painting queue on the beginNativePainting() call and probably has to assume that everything has changed as soon as endNativePainting() is called.
For the second part, I am not sure I understand what you are aiming at. Basically, if you have a QPainter object, you can call beginNativePainting() once, but you have to match it with an endNativePainting() call. So the usual place would be the paint() method.
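For reference, the pattern from the documentation looks roughly like this (MyGLWidget is a placeholder name):
void MyGLWidget::paintEvent(QPaintEvent *) {
    QPainter painter(this);
    painter.fillRect(0, 0, width(), height(), Qt::white); // ordinary QPainter work
    painter.beginNativePainting(); // Qt flushes its queued painting commands here
    glEnable(GL_SCISSOR_TEST);
    glScissor(0, 0, 64, 64);
    glClearColor(1, 0, 0, 1);
    glClear(GL_COLOR_BUFFER_BIT);
    glDisable(GL_SCISSOR_TEST);
    painter.endNativePainting();   // Qt reclaims (and re-dirties) the GL state
}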
Qt uses a range of OpenGL functionality to implement its 2D painting, including custom shaders and various framebuffers. It puts OpenGL into a pretty messy state.
beginNativePainting / endNativePainting are there to allow Qt's drawing engine to save this context and restore it once the user is done drawing.
It would have been nice to have the xxxNativePainting methods do the contrary (i.e. automatically save and restore the user's OpenGL configuration), but since Qt allows OpenGL primitives to be called directly, saving the global state is nigh impossible without tons of code and a potentially serious performance hit.
Instead, these methods simply save Qt's internal OpenGL state and, rather than having user code start in a configuration that would be meaningless anyway (and likely to change with each new Qt release), reset OpenGL to a "neutral" state.
It means that, inside a begin/end section, you will start with a clean slate: no shader bound, no vertex array, most global parameters reset, etc.
Contrary to a simple QGLWidget / paintGL scenario, where you can afford to set up the global OpenGL state once and for all and simply issue the rendering primitives each frame, you will have to restore pretty much everything just after the call to beginNativePainting (link/bind your shaders, set global parameters, select and enable various buffers, etc.).
It also means that you should use native painting sparingly. Having every single widget do custom painting might soon bring your rendering to its knees.
The new Animation Framework in Qt 4.6+ is based on QTimeLine, which has a public 'void setUpdateInterval(int interval)' function. QGraphicsItemAnimation, also based on QTimeLine, can access this function, but the new animation framework classes (e.g. QPropertyAnimation) can't! Is the animation framework locked to roughly 60 updates per second - which, for a QPropertyAnimation animating a position property, corresponds to a pixel-by-pixel transition of at most 60 pixels on screen per second - or is there a way to increase this without reimplementing everything?
I think there are some hardware limitations, and also some limits in how the OS/Qt handles some of the painting. The Qt main loop has something to do with it as well.
In my experience, double buffering and repainting only the regions that need to be repainted will give you the smoother animations you are looking for. Also make sure that your graphics are close to the size at which you actually paint them. Raising the update rate won't help on most monitors, because they don't refresh faster than 60 Hz.
Here is a link that may be helpful.
Qt works hard to optimize and get the graphics to look nice on a lot of platforms, and I know that as they are preparing for Qt 5, that there are some more changes to how the raster engine works.
I've also seen, in person, a demonstration similar to what is discussed here, where they showed the frames per second they could get when painting tiles in a game. Here is a link to a video discussing it. It goes into tweaking Qt's performance for a particular game implementation, and what helps and what doesn't.
Here's my solution for Qt 4.8:
// You need the Qt source code to access QUnifiedTimer (used by QAnimationDriver).
// Alternatively, copy'n'paste only the declaration of the QUnifiedTimer class.
#include <qt/src/corelib/animation/qabstractanimation_p.h>
...
// In the animation thread, set the update timing interval.
// The smaller the interval, the more updates and the higher the CPU consumption.
int animationTimingInterval = update_interval_in_msecs_u_want;
QUnifiedTimer::instance()->setTimingInterval(animationTimingInterval);
I am trying to develop a map application for scientific purposes at my university. For this I have access to a lot of tiles (256x256). I can fetch them and save them to a QImage in a separate QThread. My problem is: how can I manage to load the QImage into a texture within the separate QThread (not the GUI main thread)? Or even better, give me a tip on how to approach this problem.
I thought about multithreaded OpenGL, but I also require OpenGL picking and I did not come across anything useful for that.
Point me to any useful example code if you feel like it; I am thankful for everything that compiles on Linux :)
Note1: I am using event-based rendering, so the scene is only redrawn when it changes.
Note2: OSG is NOT an option; it's far too heavy for this purpose, a lightweight approach is needed.
Note3: The application is entirely written in C++.
Thanks for any reply.
P.S. Be patient, I am not as advanced as this topic may (or may not) suggest.
OpenGL is not thread-safe. You can only use a GL context in one thread at a time. Depending on the OS, you also have to explicitly release the context handle in one thread before you can use it in another.
You cannot speed up the texture loading by threading, given that the bottleneck here is the bandwidth to the graphics card.
Let your delivery thread(s) that load the tiles fill up a ring buffer. The GL thread feeds from the ring buffer. With two mutexes it is easy to control the ring buffer and make this a thread-safe operation.
That would be my suggestion.
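A minimal sketch of such a ring buffer, assuming QImage tiles (for brevity it uses a single mutex rather than the two-mutex variant mentioned above):
#include <QImage>
#include <QMutex>
#include <QMutexLocker>
// Fixed-capacity ring buffer shared by the loader thread(s) and the GL thread.
class TileRing {
public:
    // Called from the loader thread(s); returns false when the buffer is full.
    bool push(const QImage &tile) {
        QMutexLocker lock(&m_mutex);
        if (m_count == Capacity)
            return false;
        m_slots[m_head] = tile;
        m_head = (m_head + 1) % Capacity;
        ++m_count;
        return true;
    }
    // Called from the GL thread; returns false when the buffer is empty.
    bool pop(QImage &tile) {
        QMutexLocker lock(&m_mutex);
        if (m_count == 0)
            return false;
        tile = m_slots[m_tail];
        m_tail = (m_tail + 1) % Capacity;
        --m_count;
        return true;
    }
private:
    static const int Capacity = 32;
    QImage m_slots[Capacity];
    int m_head = 0, m_tail = 0, m_count = 0;
    QMutex m_mutex;
};
// In the GL thread, e.g. before drawing, drain the ring and upload:
// QImage tile;
// while (ring.pop(tile)) uploadTileToTexture(tile); // uploadTileToTexture is yours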
Two tricks I use to speed things up:
pixel buffer objects: map GPU memory so the loading thread can write directly to the GPU;
sync objects: with a sync object I know when the texture is really ready to be used (glTexImage2D with a PBO is asynchronous, so there is no guarantee the texture is ready to be bound; i.e., binding the texture blocks if the DMA transfer hasn't finished updating the texture data). A sketch of both tricks combined follows below.
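A rough sketch, assuming a GL 3.2+ context (for glFenceSync) and 256x256 RGBA tiles; texture creation and error handling omitted:
#include <cstring>
// GL function loader (glad / GLEW / ...) assumed to be initialized already.
const int TILE_W = 256, TILE_H = 256, TILE_BYTES = TILE_W * TILE_H * 4;
GLsync uploadTile(GLuint tex, const unsigned char *pixels) {
    // 1. Stream the pixels through a pixel buffer object.
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, TILE_BYTES, 0, GL_STREAM_DRAW);
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    std::memcpy(dst, pixels, TILE_BYTES); // this write can live in the loader thread
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    // 2. Kick off the (asynchronous) PBO -> texture transfer.
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, TILE_W, TILE_H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0); // data pointer = offset into PBO
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    glDeleteBuffers(1, &pbo); // driver defers deletion until the copy is done
    // 3. Drop a fence so we can later tell when the upload has finished.
    return glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}
// Before binding the texture for drawing:
// glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 0); // 0 ns = just poll
// glDeleteSync(fence);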