Is OpenGL useful to display a map? - Qt

I have to display a tiled map showing the result of a simulation.
One can zoom in and out on the map, so when zoomed far out, many more tiles are displayed.
I am using QGraphicsPixmapItem to add the tiles to a QGraphicsScene.
I wonder whether OpenGL would be able to speed things up.

I am using QGraphicsPixmapItem to add the tiles to a QGraphicsScene.
QGraphicsScene already uses spatial subdivision (a BSP tree index) to determine which parts of a scene are visible and which are not. In addition, QGraphicsScene can use OpenGL as a rendering backend.
I strongly suggest you stick with QGraphicsScene; you'll hardly get more efficient than this, especially considering your next question:
I wonder whether openGl would be able to speed things up
Not if used naively. OpenGL is not a scene graph: it cannot cull away geometry that is not visible and skip the drawing commands for it. If you send it drawing commands, it will process them. Unlike QGraphicsScene, which maintains scene data, OpenGL will carry out whatever drawing operation you ask it to do, even if the final result is invisible. Only in the very last processing steps (clipping, early fragment rejection) are invisible fragments discarded.
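If you want to try the OpenGL backend mentioned above, here is a minimal sketch (assuming the Qt OpenGL module; the QGraphicsScene stands in for your tiled-map scene):

#include <QApplication>
#include <QGraphicsScene>
#include <QGraphicsView>
#include <QGLWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QGraphicsScene scene;              // your tiled-map scene with QGraphicsPixmapItems
    QGraphicsView view(&scene);

    // Switch the rendering backend to OpenGL: same scene, same items,
    // only the viewport widget changes.
    view.setViewport(new QGLWidget(QGLFormat(QGL::SampleBuffers)));
    view.setViewportUpdateMode(QGraphicsView::FullViewportUpdate);

    view.show();
    return app.exec();
}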

Related

How expensive are QPainter::save() and QPainter::restore()?

I want to build a scene graph to store and manage my scene layout which will be painted using QPainter (like QPicture, but the layout should be modifiable).
The scene graph will contain nodes for transformations, clipping and primitives. The first two will need to store the current state of the painter to restore it afterwards. It seems natural to use QPainter::save() and QPainter::restore() respectively.
I am a bit concerned about the efficiency of these two functions. Qt's documentation gives no information here. Looking at Qt's source code, it seems that QPainter::save() copies every element of the state, e.g. the pen, the brush, the transformation, the clipping path, and many more. It seems to me that storing the former state of the one or two relevant properties I actually need myself would be far more efficient. Does anyone have any experience with this?
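To illustrate what I mean, here is a minimal sketch of the alternative (TransformNode is a hypothetical node type from my scene graph, not a Qt class): it copies and restores only the painter's transform instead of calling save()/restore().

#include <QPainter>
#include <QTransform>

// Hypothetical transformation node: it touches only the transform,
// so it saves and restores only that single property.
class TransformNode
{
public:
    explicit TransformNode(const QTransform &t) : m_transform(t) {}

    void paint(QPainter *painter) const
    {
        const QTransform old = painter->transform(); // copy one property instead of the full state
        painter->setTransform(m_transform, true);    // combine with the current transform
        // ... paint the child nodes here ...
        painter->setTransform(old, false);           // restore only the transform
    }

private:
    QTransform m_transform;
};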

Texture taken from Item: can I make its filtering be gamma-correct?

If this was a texture that I created, I'd simply make its internalFormat be GL_SRGB. But I'm passing a Qt Quick Item foo into my custom QQuickFramebufferObject GL code, where I take foo->textureProvider()->texture() and use that texture to render.
So can I make the filtering of the texture (when bilinearly sampling it) be gamma-correct?
Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
I've searched Google for hooks Qt may provide to configure this, but found nothing except QQuickTextureFactory, which, however, does not solve my problem, at least AFAICS.
I need to support OpenGL ES 2.0.
Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Well, from the filtered result color, there is simply no way to get back the original colors used as input, even if you know the interpolation factors.
Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
A more efficient variation of this strategy would be to create a second view onto the texture data with an sRGB format (see the GL_ARB_texture_view extension, core since GL 4.3), which completely avoids the copy and the additional RAM usage.
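A rough sketch of that idea (desktop GL 4.3 / ARB_texture_view only, so not an option on OpenGL ES 2.0; it also assumes the source texture is immutable, i.e. was allocated with glTexStorage2D, which texture views require):

// Inside your GL rendering code, with a current context.
// Create an sRGB view onto an existing GL_RGBA8 texture.
// sourceTex is the texture id obtained from Qt Quick (assumed to have immutable storage).
GLuint srgbView;
glGenTextures(1, &srgbView);
glTextureView(srgbView, GL_TEXTURE_2D, sourceTex,
              GL_SRGB8_ALPHA8,
              0, 1,    // first mip level, number of levels
              0, 1);   // first array layer, number of layers
glBindTexture(GL_TEXTURE_2D, srgbView);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Sampling srgbView now converts sRGB to linear before filtering.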

QGLWidget is slower than QWidget

The problem is mainly stated in the title. I tried out Qt's 2dpainting example and noticed that the same code consumes more CPU power when drawing on a QGLWidget and less when drawing simply on a QWidget. I thought the QGLWidget should be faster. And one more interesting phenomenon: on the QGLWidget the antialiasing hint seems to be ignored.
OpenGL version: 3.3.0
So why is that?
Firstly, note this text at the bottom of the documentation that you link to:
The example shows the same painting operations performed at the same time in a Widget and a GLWidget. The quality and speed of rendering in the GLWidget depends on the level of support for multisampling and hardware acceleration that your system's OpenGL driver provides. If support for either of these is lacking, the driver may fall back on a software renderer that may trade quality for speed.
Putting that aside, hardware rendering is not always guaranteed to be faster than software rendering; it all depends upon what the renderer is being asked to do.
An example of where software can exceed hardware is when the item being rendered is constantly changing. So, if you have a drawing program that draws a line following the mouse as it moves, and it is implemented by adding points to a painter path that is redrawn every frame, a hardware renderer will be subject to constant pipeline stalls as new points are added to the painter path. Setting up the graphics pipeline after a stall takes time, which is not something a software renderer has to deal with.
In the 2dpainting example you ask about, the helper class that performs the paint calls is doing a lot of unnecessary work: saving the painter state, setting the pen and brush, rotating the painter, and restoring the painter state. All of this is a bigger overhead in hardware than in software. To really see hardware rendering outperform software, pre-calculate the objects' positions outside of the render loop (the paint function) and then do nothing but the actual rendering in the paint function; that is likely to show a noticeable difference here.
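A minimal sketch of that idea (PrecalculatedWidget and m_path are illustrative names, not part of the 2dpainting example): the geometry is built once, and paintEvent() does nothing but draw it.

#include <QGLWidget>
#include <QPainter>
#include <QPainterPath>

// Illustrative widget: the shape is prepared once, not rebuilt every frame.
class PrecalculatedWidget : public QGLWidget
{
public:
    explicit PrecalculatedWidget(QWidget *parent = 0) : QGLWidget(parent)
    {
        // Pre-calculate the geometry outside the render loop.
        m_path.addEllipse(10, 10, 200, 100);
        m_path.addRect(50, 150, 120, 80);
    }

protected:
    void paintEvent(QPaintEvent *)
    {
        QPainter painter(this);
        painter.setRenderHint(QPainter::Antialiasing);
        painter.drawPath(m_path);   // only the actual drawing happens here
    }

private:
    QPainterPath m_path;
};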
Finally, regarding anti-aliasing, the documentation that you linked to states: "the QGLWidget will also use anti-aliasing if the required extensions are supported by your system's OpenGL driver".
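In practice that means requesting a multisampled format when the QGLWidget is constructed, roughly like this (a sketch, assuming the Qt 4 style QGL classes used by the example):

#include <QGLFormat>
#include <QGLWidget>

// e.g. in main() or in the parent widget's constructor:
QGLFormat format(QGL::SampleBuffers);        // ask for a multisampled surface
format.setSamples(4);                        // hint: 4x multisampling
QGLWidget *glWidget = new QGLWidget(format); // anti-aliased if the driver supports it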

Draw a tree with Qt

I need to draw a tree with Qt.
I was thinking of using QGraphicsScene and QGraphicsItem for the nodes. But since I want the nodes to be movable, what is the best way to handle the lines between the nodes?
Any suggestions?
Thanks.
I would implement the edges as items as well, QGraphicsLineItem in particular. The line could go between the centers of the connected nodes.
Keep a reference to the incident edges in the node item, and while a node is being dragged update each incident line with:
edge->setLine(QLineF(node_center, other_node_center));
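A small sketch of that pattern (NodeItem and m_edges are illustrative names; Qt's Elastic Nodes example uses the same idea), assuming for simplicity that each node is stored as the first endpoint (p1) of its edges:

#include <QGraphicsEllipseItem>
#include <QGraphicsLineItem>
#include <QList>
#include <QVariant>

// Illustrative node item that keeps its incident edges glued to it while dragged.
class NodeItem : public QGraphicsEllipseItem
{
public:
    NodeItem() : QGraphicsEllipseItem(-10, -10, 20, 20)
    {
        setFlag(ItemIsMovable);
        setFlag(ItemSendsGeometryChanges);   // required so itemChange() reports position changes
    }

    void addEdge(QGraphicsLineItem *edge) { m_edges.append(edge); }

protected:
    QVariant itemChange(GraphicsItemChange change, const QVariant &value)
    {
        if (change == ItemPositionHasChanged) {
            // Move the p1 endpoint of every incident edge to this node's new position.
            foreach (QGraphicsLineItem *edge, m_edges)
                edge->setLine(QLineF(pos(), edge->line().p2()));
        }
        return QGraphicsEllipseItem::itemChange(change, value);
    }

private:
    QList<QGraphicsLineItem *> m_edges;
};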
I suggest you use QML for drawing this kind of thing (I hate the QML language, but unfortunately it is the future in Qt for drawing high-performance graphics; they are working hard on that, and Qt 5 will also be more QML-centric, I guess). For drawing lines you can use rotated thin rectangles. See Rectangle.

Qt - multiple layers containing a QGraphicsView in one 3D scene (QGLWidget)

I'm currently evaluating the possibilities to implement a navigable 3D scene which allows rendering multiple 2D layers. To be a bit more precise, I would like to display multiple graphs in a 3D environment in order to pinpoint similarities and differences between those graphs. Considering the following screenshot, there would be two graphs (one black, one grey), which are equivalent; for differing graphs, deviant nodes might, e.g., be highlighted in red.
I am working with Qt's Graphics View Framework and have built an editable graph editor using QGraphicsScene and several QGraphicsItems, which I developed separately from this project.
Qt provides OpenGL support, e.g. the QGLWidget, and I had a look at the provided examples. Given that I have not worked with OpenGL (I did some work with Java3D, though), I would love it if some people could share their experience.
Several solutions came to my mind:
Render every QGraphicsView to a QPixmap and display them in 3D, which would make the graphs navigable but would prohibit any picking of elements etc.
Create an equivalent 3D element for every 2D graph element and "transform" every QGraphicsView into a 3D representation. I guess this would be quite some work (especially as I have not worked with OpenGL).
Maybe there is an easy way to "place" the QGraphicsScenes, the views, or just the QGraphicsItems in a QGLWidget without many adaptations and still receive the usual mouse click events etc.
For a first implementation, a plain navigable "viewer" which displays multiple graphs in different layers would be sufficient. But I would like to keep it extendable in order to add, e.g., picking in the future.
The Qt3D project provides a class called QGraphicsEmbedScene which does exactly what you are asking for.
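A rough sketch of how that might be used (the header name and the renderToTexture()/deliverEvent() calls follow the Qt3D documentation as I recall it; treat the exact signatures as an assumption and check them against your Qt3D version):

#include <QGraphicsEmbedScene>   // assumed header name, from the Qt3D module

// QGraphicsEmbedScene behaves like a QGraphicsScene, so the existing
// graph items can be added to it unchanged.
QGraphicsEmbedScene layerScene;
// ... add the graph's QGraphicsItems to layerScene as usual ...

// Inside the QGLWidget's paint code, once per frame and per layer:
GLuint textureId = layerScene.renderToTexture();   // assumed API: renders the 2D scene into a GL texture
// bind textureId and draw a textured quad at this layer's depth ...

// For picking, forward mouse events into the embedded scene at the
// texture coordinate hit on the quad (assumed API):
// layerScene.deliverEvent(event, QPointF(u, v));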
