I'm reading through the comments on this Qt bug report:
https://bugreports.qt-project.org/browse/QTBUG-32741
That bug report suggests using a vertex shader with the QSGGeometry and animating in C++. I know the QML side supports shaders, but how do you access a shader from a QSGGeometry material that is handled via a C++ subclass of QQuickItem (as I interpret the bug report to suggest)? The vertex shaders accessed within QML are generally for deforming existing geometry, not really for creating new geometry or animating geometry, from what I can tell.
Or is the report suggesting to bypass QML completely for this task?
It would be pretty cool to pass data to a vertex shader for raw drawing and have the GL viewport be the Quick Item, but I don't think the QML shaders are designed for this.
In your QQuickItem subclass, the overridden updatePaintNode() method should create (and update when needed) an instance of QSGGeometryNode and set it up with a QSGGeometry configured for your specific geometry type. That allows you to directly control a vertex buffer object (just one, but with an arbitrary layout of vertex attributes) and to use your own custom shaders.
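As a rough illustration, here is a minimal sketch along those lines (the class name LineItem and the two-vertex line are placeholders of mine, not from the bug report); a custom QSGMaterial subclass would go where QSGFlatColorMaterial is used to plug in your own shaders:

#include <QQuickItem>
#include <QSGGeometryNode>
#include <QSGFlatColorMaterial>

class LineItem : public QQuickItem
{
    Q_OBJECT
public:
    LineItem() { setFlag(ItemHasContents, true); }

protected:
    QSGNode *updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *) override
    {
        QSGGeometryNode *node = static_cast<QSGGeometryNode *>(oldNode);
        QSGGeometry *geometry;
        if (!node) {
            node = new QSGGeometryNode;
            // Two 2D vertices; the attribute layout is up to you.
            geometry = new QSGGeometry(QSGGeometry::defaultAttributes_Point2D(), 2);
            geometry->setDrawingMode(GL_LINES);
            node->setGeometry(geometry);
            node->setFlag(QSGNode::OwnsGeometry);
            // Swap this for your own QSGMaterial subclass to run custom shaders.
            auto *material = new QSGFlatColorMaterial;
            material->setColor(Qt::red);
            node->setMaterial(material);
            node->setFlag(QSGNode::OwnsMaterial);
        } else {
            geometry = node->geometry();
        }
        // Write straight into the vertex buffer.
        QSGGeometry::Point2D *v = geometry->vertexDataAsPoint2D();
        v[0].set(0, 0);
        v[1].set(width(), height());
        node->markDirty(QSGNode::DirtyGeometry);
        return node;
    }
};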
See "Custom Geometry" example in qt documentation. Full project is in official repository.
An even more interesting example is "Texture in SGNode". It uses the QQuickWindow::beforeRendering() signal to run completely arbitrary OpenGL code. In this example, the custom rendering goes into a framebuffer object (FBO); that FBO is later used as a texture in a QSGSimpleTextureNode subclass.
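For the beforeRendering() route, the connection looks roughly like this (MyItem and its renderGL slot are hypothetical names; the direct connection matters so the slot runs on the render thread):

// Run raw OpenGL right before the scene graph renders a frame.
connect(window(), &QQuickWindow::beforeRendering,
        this, &MyItem::renderGL, Qt::DirectConnection);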
It seems we have to use Model's geometry property, but the only subclass of Geometry seems to be GridGeometry.
Maybe it is not a wise idea, performance-wise, to handle this data in QML/JS?
When using OpenGL there is the function glShadeModel, with which you can switch between flat and smooth shading. It seems that when you use Qt3D, the default shade model is GL_SMOOTH. Is it possible to set glShadeModel to GL_FLAT using QML in Qt3D?
There is a thread about this question (Qt3D + glShadeModel), but it seems to be obsolete.
Qt3D is built around a programmable pipeline, so there's no such thing as a "shade model". You must supply a Material that does flat shading.
I'm not sure if there's one provided out of the box, but you can easily write your own.
If you're using a decent version of GLSL, it's just a matter of propagating outputs from the vertex shader to inputs of the fragment shader and marking them as flat. flat in GLSL means "disable interpolation of this value across the primitive; instead, use the value from the provoking vertex for all the fragments rasterized from that primitive".
If instead you want to support older versions of GLSL, there's no way to disable such interpolation, so you must duplicate vertex data for all the primitives and give each copy of the vertex data for a given primitive the same value (say, on a "color" attribute).
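To make the modern-GLSL route concrete, here is a sketch of such a shader pair, written as C++ string constants; GLSL 3.30 is assumed and the attribute/uniform names are arbitrary:

static const char *vertexSrc = R"(
    #version 330
    in vec3 position;
    in vec3 color;
    flat out vec3 vColor;   // no interpolation across the primitive
    uniform mat4 mvp;
    void main() {
        vColor = color;
        gl_Position = mvp * vec4(position, 1.0);
    }
)";

static const char *fragmentSrc = R"(
    #version 330
    flat in vec3 vColor;    // qualifier must match the vertex output
    out vec4 fragColor;
    void main() {
        fragColor = vec4(vColor, 1.0);
    }
)";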
The top-level view in the application I am working on is not Qt-based. This view has its own APIs to draw lines, pixels, etc. I would like to take a rectangular portion of this view and attach it to a QMainWindow instance. I guess there must be some mechanism within Qt that attaches a native screen surface (Windows, X11, etc.) to a QMainWindow. Can you please direct me to the abstract class that Qt uses for drawing to the actual surface? Regards.
If you're using Qt 4 there's QX11EmbedWidget, which doesn't seem to exist in Qt 5, and I can't find a good replacement. In terms of surface rendering, everything is done as a QPaintDevice if it's subclassed from QWidget (which, as far as I know, every GUI element is).
The default raster backend draws on a QImage, so what you paint on with a QPainter in any widget is a QImage.
The backing store QImage shares the image bits with the underlying platform. On Windows, the QImage accesses a DIB section's data directly. On X11, the QImage accesses a shared memory XImage.
In all cases, assuming that your non-Qt code expects a bitmap to paint on, you can pass the data pointer from the QImage to the non-Qt code, within the paint event:
QImage * image = dynamic_cast<QImage*>(backingStore()->paintDevice());
The non-Qt code needs to interface properly with a large bitmap: it needs to accept a starting scan line to draw on, an X offset, and the scanline length.
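Putting it together, a sketch of such a paint event might look like this (drawWithLegacyApi is a hypothetical entry point of the non-Qt view, not a real API):

void MyWidget::paintEvent(QPaintEvent *)
{
    QImage *image = dynamic_cast<QImage *>(backingStore()->paintDevice());
    if (!image)
        return; // not the raster backend
    // The widget may sit anywhere inside the top-level window's backing
    // store, so pass its offset and the true scanline length with the bits.
    const QPoint offset = mapTo(window(), QPoint(0, 0));
    drawWithLegacyApi(image->bits(),          // pixel data
                      image->bytesPerLine(),  // scanline length in bytes
                      offset.x(), offset.y(), // where this widget starts
                      width(), height());     // area it may draw on
}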
I want to use Qt 5.4 to create a window and render some stuff in that window with normal OpenGL functions. In the last few days I have read a lot about the Qt classes and how to initialize OpenGL and so on. I think the main classes I have to deal with are QOpenGLWindow or QOpenGLWidget, but there are QSurface and some other classes too. Now I am very unsure about what to do next and which class I should use to get at the plain OpenGL functions later. Can someone explain more clearly what I have to do to set up a Qt GUI in which I can use plain OpenGL?
Some other questions from me are:
At which point does Qt create a plain OpenGL context? Do I have to use the QOpenGLContext?
What is exactly the difference between a QSurface and a QOpenGLWindow? In the QOpenGLWindow example both classes are used.
Is it possible to use GLEW alongside this Qt stuff? There are some questions here that deal with setting up GLEW with Qt, but I think I did not get the real point of why GLEW is needed.
Edit: I discussed this question with a colleague and our only conclusion was to use offscreen rendering. Does anyone know another solution?
At which point does Qt create a plain OpenGL context? Do I have to use the QOpenGLContext?
Either where it's documented (for instance, creating a QOpenGLWidget or a QOpenGLWindow will automatically create a context), or you can create a context manually at any time by creating a QOpenGLContext object.
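A hand-rolled setup might look roughly like this sketch, where window is assumed to be a QWindow whose surface type was set to QSurface::OpenGLSurface:

QOpenGLContext *ctx = new QOpenGLContext;
ctx->setFormat(window->requestedFormat()); // or a QSurfaceFormat of your own
if (!ctx->create())
    qFatal("Could not create an OpenGL context");
ctx->makeCurrent(window);
// ... plain OpenGL calls go here ...
ctx->swapBuffers(window);
ctx->doneCurrent();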
What is exactly the difference between a QSurface and a QOpenGLWindow? In the QOpenGLWindow example both classes are used.
A QSurface is a base class representing a "drawable surface" (onscreen or offscreen). QWindow is its onscreen implementation (representing a top-level window), so it inherits from QSurface. You can draw onto a QWindow using OpenGL or a CPU-based rasterizer.
Finally, QOpenGLWindow is a QWindow subclass which offers some extra functionality and convenience by automatically creating and managing an OpenGL context (via QOpenGLContext), offering an optional partial-update strategy (through the use of an FBO), and so on.
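In practice that means plain OpenGL fits naturally into a small QOpenGLWindow subclass, along these lines:

#include <QOpenGLWindow>
#include <QOpenGLFunctions>

class GLWindow : public QOpenGLWindow, protected QOpenGLFunctions
{
protected:
    void initializeGL() override
    {
        initializeOpenGLFunctions(); // resolve GL entry points through Qt
        glClearColor(0.1f, 0.1f, 0.2f, 1.0f);
    }
    void paintGL() override
    {
        glClear(GL_COLOR_BUFFER_BIT);
        // ... plain OpenGL drawing ...
    }
};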
Is it possible to use GLEW alongside this Qt stuff? There are some questions here that deal with setting up GLEW with Qt, but I think I did not get the real point of why GLEW is needed.
Qt does not get in your way, and it doesn't change your usage of OpenGL in any way. Just use Qt to create a window and a context (in a totally cross-platform way), and then you're free to use GLEW (to resolve OpenGL function pointers, extensions, etc.) or any third-party OpenGL abstraction.
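If you do want GLEW, the only Qt-specific requirement is that a context is current before glewInit() runs; continuing the sketch above:

ctx->makeCurrent(window);   // any current Qt-created context will do
glewExperimental = GL_TRUE; // needed for core profile contexts
if (glewInit() != GLEW_OK)
    qFatal("glewInit() failed");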
I have to display a tiled map showing the result of a simulation.
One can zoom in and out on the map, so when zoomed out, many more tiles are displayed.
I am using QGraphicsPixmapItem to add the tiles to a QGraphicsScene.
I wonder whether OpenGL would be able to speed things up.
I am using QGraphicsPixmapItem to add the tiles to a QGraphicsScene.
QGraphicsScene already uses methods like spatial subdivision (BSP trees) to determine which parts of a scene are visible and which are not. In addition, a QGraphicsView can use OpenGL as its rendering backend.
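Switching the backend is essentially a one-liner on the view; a sketch, with view being your QGraphicsView (QOpenGLWidget exists since Qt 5.4, older code used QGLWidget):

view->setViewport(new QOpenGLWidget);
view->setViewportUpdateMode(QGraphicsView::FullViewportUpdate);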
I strongly suggest you stick with QGraphicsScene, you'll hardly get more efficient than this, especially considering your next question:
I wonder whether OpenGL would be able to speed things up.
Not if used naively. OpenGL is not a scene graph: it can't cull away and skip drawing commands for geometry that is not visible. If you send it drawing commands, it will process them. Unlike QGraphicsScene, which maintains scene data, OpenGL will carry out whatever drawing operation you ask it to do, even if the final result is invisible. Only in the very last processing steps (clipping, early fragment rejection) are invisible fragments discarded.