I want to build a scene graph to store and manage my scene layout which will be painted using QPainter (like QPicture, but the layout should be modifiable).
The scene graph will contain nodes for transformations, clipping, and primitives. The first two need to store the painter's current state and restore it afterwards, so it seems natural to use QPainter::save() and QPainter::restore() for this.
I am a bit concerned about the efficiency of these two functions; Qt's documentation gives no information here. Looking at Qt's source code, it seems that QPainter::save() copies every element of the state: the pen, the brush, the transformation, the clipping path, and many more. It seems to me that storing the previous state of the one or two relevant properties I actually need myself would be far more efficient. Does anyone have experience with this?
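For illustration, here is a minimal sketch of the two approaches (the node classes are hypothetical; only the QPainter calls are real API):

    #include <QPainter>

    // Hypothetical transform node: caches and restores only the one
    // property it actually modifies.
    struct TransformNode {
        QTransform localTransform;
        void paint(QPainter *p) {
            const QTransform saved = p->transform();  // copy one property
            p->setTransform(localTransform, true);    // combine with current
            // ... paint child nodes ...
            p->setTransform(saved);                   // put it back
        }
    };

    // The same node using the full state stack, for comparison.
    struct FullStateNode {
        QTransform localTransform;
        void paint(QPainter *p) {
            p->save();    // copies pen, brush, font, clip, transform, ...
            p->setTransform(localTransform, true);
            // ... paint child nodes ...
            p->restore();
        }
    };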
I've been looking for a robust pathfinding method for a platformer game I'm developing, and A* looks like the best method available. I noticed there is a demo of the AStar implementation in Godot; however, it is written for a grid/tile-based game, and I'm having trouble adapting it to a platformer where the Y axis is constrained by gravity.
I found a really good answer that describes how A* can be applied to platformers in Unity. My question is: is it possible to use AStar in Godot to achieve the same thing described in that answer? Could this be done better without using the built-in AStar framework? And what would a really simple example of how it works (with or without AStar) look like in GDScript?
Though I have already posted a 100-point bounty (and it has expired), I would still be willing to post another 100-point bounty and award it, pending an answer to this question.
You could repurpose the Navigation2D node for platformer purposes. The picture below shows an example usage. The Navigation2D node makes it possible to find the shortest path between two points that lie within the combined navigation polygon (the union of all NavigationPolygonInstances).
You can use the get_simple_path method to get a Vector2 array describing the points your agent/character should try to reach (or get close to, within some predefined margin) in sequence. Place the points in a queue and move the character toward each one in turn by moving it horizontally. Whenever the agent's next point in the queue is too high up to reach, make the agent jump.
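Here is a minimal GDScript sketch of that idea (Godot 3.x; the Navigation2D node path, the constants, and the jump threshold are all assumptions for illustration):

    # Attached to a KinematicBody2D; assumes a Navigation2D sibling node
    # covered by NavigationPolygonInstances, as described above.
    extends KinematicBody2D

    const SPEED = 120.0          # horizontal speed, px/s (tune for your game)
    const JUMP_SPEED = -320.0    # negative = up
    const GRAVITY = 800.0
    const REACH_MARGIN = 8.0     # how close counts as "reached"
    const JUMP_THRESHOLD = 16.0  # next point this much higher => jump

    onready var nav = get_parent().get_node("Navigation2D")
    var path = []                # queue of Vector2 points to visit
    var velocity = Vector2.ZERO

    func go_to(target):
        # get_simple_path returns points inside the navigation polygon
        path = Array(nav.get_simple_path(global_position, target))

    func _physics_process(delta):
        velocity.y += GRAVITY * delta
        if path.size() > 0:
            var next = path[0]
            if global_position.distance_to(next) < REACH_MARGIN:
                path.pop_front()             # reached it, take the next one
            else:
                velocity.x = sign(next.x - global_position.x) * SPEED
                # next point clearly above us while we stand on ground: jump
                if next.y < global_position.y - JUMP_THRESHOLD and is_on_floor():
                    velocity.y = JUMP_SPEED
        else:
            velocity.x = 0
        velocity = move_and_slide(velocity, Vector2.UP)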
I hope this makes sense!
The grey/dark-blue rectangles are platforms with collision, whereas the green shapes are NavigationPolygonInstance nodes.
This approach is by no means perfect. If you were to add slopes to your game, the agent might jump up a slope instead of walking up it normally. It is also pretty tedious to create all the shapes needed.
A more robust solution would be a custom graph system that you place in the scene, positioning its vertices yourself. This opens up the possibility of one-way paths and of marking certain edges/connections between vertices as "jumpable" only. It is a lot more work, though, if you cannot find such a solution online.
If this were a texture that I created, I'd simply make its internalFormat GL_SRGB. But I'm passing a Qt Quick Item foo into my custom QQuickFramebufferObject GL code, where I take foo->textureProvider()->texture() and use that texture to render.
So, can I make the filtering of that texture (when sampling it bilinearly) gamma-correct?
Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
I've searched Google for hooks Qt may provide to configure this, but found nothing except QQuickTextureFactory, which does not solve my problem, at least as far as I can see.
I need to support OpenGL ES 2.0.
Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Well, there is simply no way to get back the original input colors from the filtered result color, even if you know the interpolation factors.
Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
A more efficient variation of this strategy would be to create a second view onto the texture data with an sRGB format (see the GL_ARB_texture_view extension, core since GL 4.3), which completely avoids the copy and the additional RAM usage.
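A sketch of what that could look like, assuming the Qt Quick texture's GL name is sourceTexture (a made-up variable). Note the caveats: texture views require the source texture to have immutable storage (glTexStorage2D), and this is desktop GL, not available on ES 2.0:

    // Create a second view onto the texture; no pixel data is copied.
    GLuint srgbView;
    glGenTextures(1, &srgbView);
    glTextureView(srgbView, GL_TEXTURE_2D, sourceTexture,
                  GL_SRGB8_ALPHA8,  // reinterpret RGBA8 data as sRGB
                  0, 1,             // minlevel, numlevels
                  0, 1);            // minlayer, numlayers
    // Sampling srgbView now decodes sRGB to linear before filtering.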
I am in the process of coding a level design tool in Qt with OpenGL (for a relevant example see Valve's Hammer, as Source games are what I'm primarily designing this for) and have currently written a few classes to represent 3D objects (vertices, edges, faces). I plan to implement an "object" class which ties the three together, keeps track of its own vertices, etc.
After having read up on rendering polygons on http://open.gl, I have a couple of questions regarding the most efficient way to render the content. Bear in mind that this is a level editor, so I am anticipating needing to render a large number of objects with arbitrary shapes and numbers of vertices/faces.
Edit: Updated to be less broad.
At what point would it be best to create the VBO? The Qt OpenGL example creates a VBO when a viewport is initialized, but I'd expect it to be inefficient to create a copy for each viewport.
Regarding the submitted answer, would it be a sensible idea to create one VBO for geometry, another for mesh models, etc.? And what happens if/when a VBO overflows?
VBOs should be (re)initialized whenever there is a need for it. Think of VBOs as memory pools: rather than allocating one VBO per object, group similar objects into a single VBO (see the sketch below). When you run out of space in one VBO, you allocate another one.
Today's GPUs are optimized for rendering indexed triangles. So GL_TRIANGLES will suffice in 90% of all cases.
Frankly, modern OpenGL implementations completely ignore the buffer object access mode. So many programs made ill use of that parameter that it became more efficient for drivers to profile the actual usage pattern and adjust their behavior to it. However, it is still a good idea to use the right mode, and in your case that is GL_STATIC_DRAW.
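A rough sketch of the pooling idea in plain OpenGL calls (POOL_BYTES, INDEX_POOL_BYTES, and the obj fields are made-up names; only the GL functions are real API):

    // Reserve two pools up front: one for vertices, one for indices.
    GLuint vbo, ibo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, POOL_BYTES, NULL, GL_STATIC_DRAW);

    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, INDEX_POOL_BYTES, NULL, GL_STATIC_DRAW);

    // Upload one object's data into its slice of the pools:
    glBufferSubData(GL_ARRAY_BUFFER, obj.vertexByteOffset,
                    obj.vertexBytes, obj.vertices);
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, obj.indexByteOffset,
                    obj.indexBytes, obj.indices);

    // Draw it as indexed triangles out of its own region:
    glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_SHORT,
                   (const void *)obj.indexByteOffset);

When a pool fills up, allocate a second VBO the same way and place new objects there.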
I have to display a tiled map showing the result of a simulation.
One can zoom in and out on the map, so when zoomed far out, many more tiles are displayed.
I am using QGraphicsPixmapItem to add the tiles to a QGraphicsScene.
I wonder whether OpenGL would be able to speed things up.
I am using QGraphicsPixmapItem to add the tiles to a QGraphicsScene.
QGraphicsScene already uses spatial subdivision (a BSP tree) to determine which parts of a scene are visible and which are not. In addition, QGraphicsScene can use OpenGL as a rendering backend.
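For reference, switching the backend is a two-liner on the view (assuming view is your existing QGraphicsView; Qt 5 shown, while in Qt 4 the viewport class was QGLWidget from the QtOpenGL module):

    // Give the view an OpenGL-backed viewport; Qt recommends full
    // viewport updates when rendering through OpenGL.
    view->setViewport(new QOpenGLWidget);
    view->setViewportUpdateMode(QGraphicsView::FullViewportUpdate);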
I strongly suggest you stick with QGraphicsScene; you'll hardly get more efficient than that, especially considering your next question:
I wonder whether OpenGL would be able to speed things up.
Not if used naively. OpenGL is not a scene graph; it can't cull away geometry that is not visible and skip the drawing commands for it. If you send it drawing commands, it will process them. Unlike QGraphicsScene, which maintains scene data, OpenGL will carry out whatever drawing operation you ask of it, even if the final result is invisible. Only in the very last processing steps (clipping, early fragment rejection) are invisible fragments discarded.
I'm currently evaluating the possibilities for implementing a navigable 3D scene that renders multiple 2D layers. To be a bit more precise, I would like to display multiple graphs in a 3D environment in order to pinpoint similarities and differences between those graphs. Considering the following screenshot, there would be two graphs (one black, one grey) which are equivalent; for differing graphs, deviating nodes might, e.g., be highlighted in red.
I am working with Qt's Graphics View Framework and have built an editable graph editor using QGraphicsScene and several QGraphicsItems, which I developed separately from this project.
Qt provides OpenGL support, e.g., QGLWidget, and I have had a look at the provided examples. Given that I have not worked with OpenGL (I did some work with Java3D, though), I would love it if some people could share their experience.
Several solutions came to my mind:
Render every QGraphicsView to a QPixmap and display them in 3D, which would make the graphs navigable but would prohibit any picking of elements, etc.
Create an equivalent 3D element for every 2D graph element and "transform" every QGraphicsView into a 3D representation. I guess this would be quite some work (especially as I have not worked with OpenGL).
Maybe there is an easy way to "place" the QGraphicsScenes, the views, or just the QGraphicsItems in a QGLWidget without many adaptations and still receive the usual mouse click events, etc.
For a first implementation, a plain navigable "viewer" that displays multiple graphs in different layers would be sufficient. But I would like to keep it extensible in order to add, e.g., picking in the future.
The Qt3D project provides a class called QGraphicsEmbedScene which does exactly what you are asking for.