glShadeModel in Qt3D

In OpenGL there is a function, glShadeModel, with which you can switch between flat and smooth shading. It seems that when you use Qt3D, the default corresponds to GL_SMOOTH. Is it possible to set the equivalent of glShadeModel(GL_FLAT) using QML in Qt3D?
There is an existing thread (Qt3d + glShadeModel) about this question, but it seems to be obsolete.

Qt3D is built around a programmable pipeline, so there's no such thing as a "shade model". You must supply a Material that does flat shading.
I'm not sure if there's one provided out of the box, but you can easily write your own.
If you're using a recent enough version of GLSL, it's just a matter of propagating outputs from the vertex shader to inputs of the fragment shader and marking them as flat. flat in GLSL means "disable interpolation of this value across the primitive; instead, use the value of the provoking vertex for all the fragments rasterized from that primitive".
If instead you need to support older versions of GLSL, there is no way to disable that interpolation, so you must duplicate the vertex data for every primitive and give each primitive's copies of the vertex data the same value (say, on a "color" attribute).
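A minimal sketch of what such a material's shaders could look like, assuming GLSL 3.30 (the same flat qualifier exists in GLSL ES 3.00). The sources are written here as C++ string literals, as you might hand them to a custom material (e.g. through Qt3DRender::QShaderProgram); the attribute and uniform names are illustrative, not guaranteed defaults:

```cpp
// Sketch only: a flat-shaded color material. In Qt3D you would feed these
// sources to a QShaderProgram inside a custom Material/Technique/RenderPass.
static const char *vertexSrc = R"(
    #version 330 core
    in vec3 vertexPosition;
    in vec3 vertexColor;
    uniform mat4 mvp;
    flat out vec3 color;             // "flat": no interpolation
    void main() {
        color = vertexColor;         // provoking vertex wins for the primitive
        gl_Position = mvp * vec4(vertexPosition, 1.0);
    }
)";

static const char *fragmentSrc = R"(
    #version 330 core
    flat in vec3 color;              // must match the "flat" output above
    out vec4 fragColor;
    void main() {
        fragColor = vec4(color, 1.0);
    }
)";
```

For flat lighting rather than a flat per-vertex color, compute the lighting in the vertex shader (or pass a flat normal) so each primitive ends up with a single value.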

Related

Texture taken from Item: can I make its filtering be gamma-correct?

If this was a texture that I created, I'd simply make its internalFormat be GL_SRGB. But I'm passing a Qt Quick Item foo into my custom QQuickFramebufferObject GL code, where I take foo->textureProvider()->texture() and use that texture to render.
So can I make the filtering of the texture (when bilinearly sampling it) be gamma-correct?
Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
I've searched Google for hooks Qt may provide to configure this, but found nothing except QQuickTextureFactory, which does not solve my problem, at least as far as I can see.
I need to support OpenGL ES 2.0.
Note: I'm aware I could implement manual bilinear filtering with 4 texture taps and lerping, but that would hurt performance somewhat, so I'm looking for a better way.
Well, from the filtered result color, there is simply no way to get back the original colors used as input, even if you know the interpolation factors.
Or I could blit from the Qt Quick texture into a GL_SRGB texture of my own, then use that texture, but that's more complex and would need to happen every time the source texture is updated, hurting performance (and RAM usage).
A more efficient variation of this strategy would be to create a second view onto the texture data with an sRGB format (see the GL_ARB_texture_view extension, core since OpenGL 4.3), which completely avoids the copy and the additional RAM usage; a sketch follows.
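For illustration, a hedged sketch of that texture-view variant. Note two caveats: texture views do not exist in OpenGL ES 2.0, so this cannot cover the ES 2.0 requirement; and glTextureView requires the source texture to have immutable storage (glTexStorage2D), which may or may not be how Qt Quick allocates its textures depending on the Qt version and backend:

```cpp
// sourceTex: the GL name of the Qt Quick item's texture (e.g. obtained from
// foo->textureProvider()->texture()), assumed to have immutable storage with
// a view-compatible internal format such as GL_RGBA8.
GLuint srgbView = 0;
glGenTextures(1, &srgbView);
glTextureView(srgbView, GL_TEXTURE_2D, sourceTex,
              GL_SRGB8_ALPHA8,   // reinterpret the same texels as sRGB
              0, 1,              // minlevel, numlevels
              0, 1);             // minlayer, numlayers
// Sampling srgbView now linearizes texels before bilinear filtering,
// making the filtering gamma-correct, with no copy and no extra RAM.
```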

Is it possible to create WebGL programs with only vertex shaders?

I'm getting a "missing shader" error when I try to link a WebGL2 program with only a vertex shader attached. I'm trying to use transform feedback, and I thought that since the vertex shader's outputs are written out, there should be no need for a fragment shader.
From this blog post (link) it seems that you should be able to do this. Is there something special about WebGL that I'm missing?
WebGL 2 is based on OpenGL ES 3.0 which per specification requires vertex and fragment shaders to be present on program objects:
"Linking can fail for a variety of reasons as specified in the OpenGL ES Shading Language Specification, as well as any of the following reasons: [...] program does not contain both a vertex shader and a fragment shader."
(OpenGL ES 3.0 Specification, page 49)
You can attach a trivial solid-color or discarding fragment shader instead, as in the sketch below.
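A hedged sketch of that route, written with C-style GL calls; the WebGL2 equivalents are methods of the same names on the context object (gl.createShader, gl.shaderSource, and so on). `program` is assumed to already exist with your vertex shader attached:

```cpp
// Minimal fragment shader that satisfies the linker when you only care
// about transform feedback output.
const char *fsSrc =
    "#version 300 es\n"
    "void main() { discard; }\n";   // or write a constant color instead

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &fsSrc, nullptr);
glCompileShader(fs);
glAttachShader(program, fs);        // alongside the vertex shader
glLinkProgram(program);

// For transform-feedback-only draws you can additionally skip
// rasterization entirely:
glEnable(GL_RASTERIZER_DISCARD);
```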

Vertex shader to create and animate geometry in QQuickItem

I'm reading through the comments on this Qt bug report:
https://bugreports.qt-project.org/browse/QTBUG-32741
That bug report suggests using a vertex shader with the QSGGeometry and animating in C++. I know the QML side supports shaders, but how do you access a shader from a QSGGeometry material that is handled via a C++ subclass of QQuickItem (as I interpret the bug report to suggest)? The vertex shaders accessed within QML are generally for deforming existing geometry, not really for creating new geometry or animating geometry, from what I can tell.
Or is the report suggesting to bypass QML completely for this task?
It would be pretty cool to pass data to a vertex shader for raw drawing and have the GL viewport be the Quick Item, but I don't think the QML shaders are designed for this.
In your QQuickItem subclass, the overridden updatePaintNode() method should create (and update when needed) an instance of QSGGeometryNode and set it up with a QSGGeometry configured for the specific geometry type. That allows you to directly control a vertex buffer object (just one, but with an arbitrary layout of vertex attributes) and use your custom shaders.
See the "Custom Geometry" example in the Qt documentation; the full project is in the official repository. A condensed version is sketched below.
An even more interesting example is "Texture in SGNode". It uses the QQuickWindow::beforeRendering() signal to run completely arbitrary OpenGL code. In that example, the custom rendering goes into a framebuffer object, and the FBO is later used as a texture in a QSGSimpleTextureNode subclass.
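A condensed sketch of that approach, close in spirit to the "Custom Geometry" example; the class and names here are illustrative, not taken from the bug report:

```cpp
#include <QQuickItem>
#include <QSGGeometryNode>
#include <QSGFlatColorMaterial>

// Illustrative item that draws (and can animate) a line strip on the scene graph.
class LineItem : public QQuickItem
{
    Q_OBJECT
public:
    LineItem() { setFlag(ItemHasContents); } // required for updatePaintNode()
protected:
    QSGNode *updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *) override
    {
        auto *node = static_cast<QSGGeometryNode *>(oldNode);
        if (!node) {
            node = new QSGGeometryNode;
            auto *geometry =
                new QSGGeometry(QSGGeometry::defaultAttributes_Point2D(), 2);
            geometry->setDrawingMode(QSGGeometry::DrawLineStrip);
            node->setGeometry(geometry);
            node->setFlag(QSGNode::OwnsGeometry);
            auto *material = new QSGFlatColorMaterial; // swap in your own
            material->setColor(Qt::red);               // QSGMaterial subclass
            node->setMaterial(material);               // for custom shaders
            node->setFlag(QSGNode::OwnsMaterial);
        }
        // Rewrite vertex data on each call to animate; schedule calls by
        // invoking update() on the item (e.g. from a timer or animation).
        QSGGeometry::Point2D *v = node->geometry()->vertexDataAsPoint2D();
        v[0].set(0, height() / 2);
        v[1].set(width(), height() / 2);
        node->markDirty(QSGNode::DirtyGeometry);
        return node;
    }
};
```

Register such a class with qmlRegisterType and instantiate it from QML; the item itself then becomes the GL-drawn surface, which is essentially what the bug report suggests, without bypassing QML.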

Efficiency in drawing arbitrary meshes with OpenGL (Qt)

I am in the process of coding a level design tool in Qt with OpenGL (for a relevant example see Valve's Hammer, as Source games are what I'm primarily designing this for) and have currently written a few classes to represent 3D objects (vertices, edges, faces). I plan to implement an "object" class which ties the three together, keeps track of its own vertices, etc.
After having read up on rendering polygons on http://open.gl, I have a couple of questions regarding the most efficient way to render the content. Bear in mind that this is a level editor, so I am anticipating needing to render a large number of objects with arbitrary shapes and numbers of vertices/faces.
Edit: Updated to be less broad.
At what point is it best to create the VBO? The Qt OpenGL example creates a VBO when a viewport is initialized, but I'd expect it to be inefficient to create a clone for each viewport.
Regarding the submitted answer: would it be sensible to create one VBO for geometry, another for mesh models, etc.? What happens if/when a VBO overflows?
VBOs should be re-/initialized whenever there's a need for it. Think of VBOs as memory pools: rather than allocating one VBO per object, group similar objects into a single VBO. When you run out of space in one VBO, allocate another one.
Today's GPUs are optimized for rendering indexed triangles. So GL_TRIANGLES will suffice in 90% of all cases.
Frankly, modern OpenGL implementations largely ignore the buffer object usage hint. So many programs made such ill use of that parameter that it became more efficient for drivers to profile the actual usage pattern and adjust their behavior to it. It's still a good idea to pass the right mode, though, and in your case that's GL_STATIC_DRAW; see the sketch below.
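A hedged sketch of the pooling idea using Qt's QOpenGLBuffer wrapper; Vertex, allVertices, allIndices, and obj are illustrative placeholders for your own containers and bookkeeping:

```cpp
#include <QOpenGLBuffer>

// One shared vertex/index buffer pair acting as a pool for many objects
// (assumes e.g. QVector<Vertex> allVertices, QVector<quint32> allIndices).
QOpenGLBuffer vbo(QOpenGLBuffer::VertexBuffer);
vbo.create();
vbo.setUsagePattern(QOpenGLBuffer::StaticDraw);  // maps to GL_STATIC_DRAW
vbo.bind();
vbo.allocate(allVertices.constData(),
             int(allVertices.size() * sizeof(Vertex)));

QOpenGLBuffer ibo(QOpenGLBuffer::IndexBuffer);
ibo.create();
ibo.setUsagePattern(QOpenGLBuffer::StaticDraw);
ibo.bind();
ibo.allocate(allIndices.constData(),
             int(allIndices.size() * sizeof(quint32)));

// Per object, draw only its slice of the shared index buffer
// (vertex attribute setup omitted):
glDrawElements(GL_TRIANGLES, obj.indexCount, GL_UNSIGNED_INT,
               reinterpret_cast<const void *>(obj.firstIndex * sizeof(quint32)));
```

Nothing "overflows" by itself: you decide a pool's size when you call allocate(), and when a pool fills up you simply create a second buffer and assign new objects to it.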

Qt - multiple layers containing a QGraphicsView in one 3D scene (QGLWidget)

I'm currently evaluating the possibilities for implementing a navigable 3D scene which allows rendering multiple 2D layers. To be a bit more precise, I would like to display multiple graphs in a 3D environment in order to pinpoint similarities and differences between those graphs. As an example (originally shown in a screenshot), there would be two graphs (one black, one grey) which are equivalent; for differing graphs, deviant nodes might, e.g., be highlighted in red.
I am working with Qt's Graphics View Framework and have built an editable graph editor using QGraphicsScene and several QGraphicsItems, which I developed separately from this project.
Qt provides OpenGL support, e.g. the QGLWidget, and I had a look at the provided examples. Given that I have not worked with OpenGL before (I did some work with Java3D, though), I would love it if some people could share their experience.
Several solutions came to my mind:
Render every QGraphicsView to a QPixmap and display them in 3D. This would make the graphs navigable but would prohibit any picking of elements, etc.
Create an equivalent 3D element for every 2D graph element and "transform" every QGraphicsView into a 3D representation. I guess this would be quite some work (especially as I have not worked with OpenGL).
Maybe there is an easy way to "place" the QGraphicsScenes, the views, or just the QGraphicsItems in a QGLWidget without many adaptations and still receive the usual mouse click events, etc.
For a first implementation, a plain navigable viewer which displays multiple graphs in different layers would be sufficient, but I would like to keep it extensible in order to add, e.g., picking in the future.
The Qt3D project provides a class called QGraphicsEmbedScene which does exactly what you are asking for.
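If that class is not available in your Qt build (it comes from the older Qt3D module), a hedged sketch of the first option above looks like this; sceneToTexture is a hypothetical helper, and picking/interaction is not handled by this approach:

```cpp
#include <QGraphicsScene>
#include <QImage>
#include <QOpenGLTexture>
#include <QPainter>

// Rasterize a QGraphicsScene into a GL texture that a textured quad in the
// 3D scene can display. Must be called with a current GL context.
QOpenGLTexture *sceneToTexture(QGraphicsScene *scene)
{
    QImage image(scene->sceneRect().size().toSize(),
                 QImage::Format_ARGB32_Premultiplied);
    image.fill(Qt::transparent);
    QPainter painter(&image);
    scene->render(&painter);   // draws all QGraphicsItems in the scene
    painter.end();
    // Mirror because GL's texture origin is bottom-left.
    return new QOpenGLTexture(image.mirrored());
}
```

The texture has to be regenerated whenever the scene changes, so this is best for mostly static layers; for interactive picking you would additionally have to map clicks on the quad back into scene coordinates yourself.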
