I want to integrate multiple custom OpenGL renderings into Qt Quick via QQuickFramebufferObject, so I created three different QQuickFramebufferObject subclasses, one for each custom OpenGL renderer. But the result is that all three items display the same single rendering instead of their own.
You most likely have a problem with buffer binding. Make sure each renderer binds its own buffers (FBO, VBOs, textures) before drawing; if the three renderers reuse each other's GL objects without rebinding, every item ends up showing the same image.
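A minimal sketch of one item/renderer pair, assuming nothing about your actual scenes (SceneARenderer and SceneAItem are illustrative names): each QQuickFramebufferObject subclass creates its own Renderer, and each Renderer creates and draws into its own FBO.

```cpp
#include <QQuickFramebufferObject>
#include <QOpenGLFramebufferObject>
#include <QOpenGLFunctions>

class SceneARenderer : public QQuickFramebufferObject::Renderer,
                       protected QOpenGLFunctions
{
public:
    QOpenGLFramebufferObject *createFramebufferObject(const QSize &size) override
    {
        QOpenGLFramebufferObjectFormat fmt;
        fmt.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
        return new QOpenGLFramebufferObject(size, fmt); // this renderer's own FBO
    }

    void render() override
    {
        if (!m_inited) { initializeOpenGLFunctions(); m_inited = true; }
        // Qt Quick binds this renderer's FBO before calling render(),
        // but any VAOs/VBOs/textures you touch must be the ones that
        // belong to *this* scene -- reusing another renderer's bindings
        // is the usual cause of "all items show the same image".
        glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... SceneA's draw calls go here ...
    }

private:
    bool m_inited = false;
};

class SceneAItem : public QQuickFramebufferObject
{
    Q_OBJECT
public:
    Renderer *createRenderer() const override { return new SceneARenderer; }
};
```

SceneB and SceneC would get their own Renderer subclasses (or one parameterized class), each binding its own resources inside render().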
How do I put the time, date, and company logo at the top of all my Qt forms?
I don't want to repeat the same code in all my classes.
I thought I could create one class with a stacked widget for the date, time, and logo, and then add it to all my other classes.
I'm not sure how to do this.
In QML, you would generally put the common code in a base component and subclass it to create children that give different values to certain fields but keep the same common ones. I use this technique to build different modes in the same app that need slightly different toolbars but the same canvas.
For widgets, you might want to write the UI code in C++ and go from there; see the sketch below.
Another approach might be to set the common values inside a model, and use those common values across the board.
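A minimal C++ sketch of the base-class idea for a widgets app (HeaderBar, BaseForm, and the logo resource path are all illustrative names): the shared header lives in one widget, and every form inherits a base class that installs it at the top.

```cpp
#include <QDateTime>
#include <QHBoxLayout>
#include <QLabel>
#include <QPixmap>
#include <QTimer>
#include <QVBoxLayout>
#include <QWidget>

// One reusable widget holding the logo and a self-updating clock.
class HeaderBar : public QWidget
{
public:
    explicit HeaderBar(QWidget *parent = nullptr) : QWidget(parent)
    {
        auto *layout = new QHBoxLayout(this);
        auto *logo = new QLabel(this);
        logo->setPixmap(QPixmap(":/images/logo.png")); // assumed resource path
        m_clock = new QLabel(this);
        layout->addWidget(logo);
        layout->addStretch();
        layout->addWidget(m_clock);

        auto *timer = new QTimer(this);
        connect(timer, &QTimer::timeout, this, &HeaderBar::updateClock);
        timer->start(1000); // refresh the date/time once per second
        updateClock();
    }

private:
    void updateClock()
    {
        m_clock->setText(QDateTime::currentDateTime().toString("yyyy-MM-dd hh:mm:ss"));
    }
    QLabel *m_clock = nullptr;
};

// Every form derives from this instead of QWidget and puts its own
// widgets into contentLayout, below the shared header.
class BaseForm : public QWidget
{
public:
    explicit BaseForm(QWidget *parent = nullptr) : QWidget(parent)
    {
        auto *outer = new QVBoxLayout(this);
        outer->addWidget(new HeaderBar(this));
        contentLayout = new QVBoxLayout;
        outer->addLayout(contentLayout);
    }

protected:
    QVBoxLayout *contentLayout = nullptr;
};
```

Each concrete form then subclasses BaseForm and adds its widgets to contentLayout, so the header appears on every screen without duplicating code.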
In JavaFX, once I have a Scene, Pane, and/or Canvas set up and have my Node graph built, how do I add my own custom components? I've already added them to the Node graph, but they're not being rendered, because they neither inherit from a particular node type nor implement the particular method necessary to have their rendering method called. There isn't much complexity involved in drawing these components -- it's about twenty calls to drawRectangle, etc.
If I recall correctly, in Swing, I had each component implement a version of draw, and draw was called automatically as part of the framework. But I haven't found the equivalent mechanism in JavaFX yet.
JavaFX doesn't have an "onDraw" hook in the usual sense, because components are usually composed from existing nodes and rendered on the GPU.
There are multiple ways to create custom drawing, depending on your needs and requirements.
You can simply use Canvas for custom drawing; it is described quite clearly in the official tutorial. This is the simplest route even for complex drawings, and probably what you are looking for: add a Canvas node to the scene and draw on it (a minimal sketch follows after this list of options). You can encapsulate the logic by extending Canvas or a container component that holds the Canvas (or via a presenter etc. if you employ some kind of MVP/MVC).
Another way is to compose the drawing from existing visual components, e.g. shapes and images, for example by extending or preparing a Pane or another container and adding child components.
Yet another is to prepare a bitmap with the custom drawing and display it through an Image component; you can use Swing or other APIs to draw the bitmap in advance and use it for rendering. In general this is similar to using a canvas but more complex; unless you see clear benefits or have particular reasons, the canvas is preferable.
The last way is to implement a custom scene Node with complete rendering. I won't go into detail, and I advise against it: it is relatively complex, relies on non-public APIs, will probably not be compatible across JDK releases, and is useful only for very special needs.
Note that if you are creating a custom reusable library component, you will probably need to dive into the topics of skinning and the component lifecycle.
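For the Canvas option above, here is a minimal self-contained sketch (class names are illustrative, and the two rectangle calls stand in for your ~20 drawing calls):

```java
import javafx.application.Application;
import javafx.scene.Scene;
import javafx.scene.canvas.Canvas;
import javafx.scene.canvas.GraphicsContext;
import javafx.scene.layout.StackPane;
import javafx.scene.paint.Color;
import javafx.stage.Stage;

public class CustomCanvasApp extends Application {

    // Encapsulates the drawing calls behind one reusable component.
    static class MyComponent extends Canvas {
        MyComponent(double width, double height) {
            super(width, height);
            draw();
        }

        void draw() {
            GraphicsContext g = getGraphicsContext2D();
            g.clearRect(0, 0, getWidth(), getHeight());
            g.setFill(Color.STEELBLUE);
            g.fillRect(10, 10, 80, 40);   // ...repeat for the other shapes
            g.setStroke(Color.BLACK);
            g.strokeRect(10, 10, 80, 40);
        }
    }

    @Override
    public void start(Stage stage) {
        // The canvas is an ordinary node, so it is added to the scene
        // graph like any other component.
        StackPane root = new StackPane(new MyComponent(200, 120));
        stage.setScene(new Scene(root));
        stage.show();
    }

    public static void main(String[] args) {
        launch(args);
    }
}
```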
I needed to pass the Pane I'm using for drawing into the constructor of my custom class. The custom class then adds the necessary shapes to the provided pane. I assume I'll also need to keep track of those shapes as a data member of the custom class and remove/replace them when the custom class needs a new visual representation.
See Fedor Losev's answer above for a more complete list of options. E.g., I could have used a Canvas instead of a Pane.
According to this article, there are two main methods for rendering raw OpenGL into an application whose UI is otherwise managed by QtQuick's scene graph. In short, they are (according to my understanding):
Calling raw OpenGL methods in hand-written code that is hooked into the scene graph's render loop through some APIs exposed by QtQuick.
Rendering the raw OpenGL portion of your scene to a QQuickFramebufferObject, which is treated like a component in the scene graph and itself rendered as if it were a texture.
What are the advantages/disadvantages of the two approaches?
The issue with the QQuickWindow::beforeRendering() and QQuickWindow::afterRendering() signals is that all OpenGL drawing done from their slots will end up, respectively, under or over the rendered Qt Quick scene. If this is good enough for you — i.e. you only want to draw a custom OpenGL background or some kind of overlay — then go for it.
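A minimal Qt 5-style sketch of option 1 (the Underlay class name is illustrative): connect to beforeRendering() with a direct connection, because the signal is emitted on the render thread.

```cpp
#include <QObject>
#include <QOpenGLFunctions>
#include <QQuickWindow>

class Underlay : public QObject, protected QOpenGLFunctions
{
    Q_OBJECT
public:
    explicit Underlay(QQuickWindow *window) : m_window(window)
    {
        // Direct connection: the slot must run on the render thread.
        connect(window, &QQuickWindow::beforeRendering,
                this, &Underlay::paint, Qt::DirectConnection);
        // Stop Qt Quick from clearing the window over our background.
        window->setClearBeforeRendering(false);
    }

public slots:
    void paint()
    {
        initializeOpenGLFunctions();
        glClearColor(0.1f, 0.2f, 0.3f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... raw OpenGL calls for the background go here ...
        m_window->resetOpenGLState(); // return a clean state to the scene graph
    }

private:
    QQuickWindow *m_window;
};
```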
If you need more, i.e. you want to use OpenGL to render some QtQuick Item that is placed within the scene graph, then you have to go with the second option: rendering the OpenGL content into a framebuffer object that is used as a texture on some QtQuick Item.
As the documentation article you linked to states, it gives you more possibilities (using multiple rendering contexts or even multiple rendering threads) but also comes at a performance cost, and it is more troublesome to implement.
Generally, since option 1 is usually inadequate, you are forced to go with option 2; it is the only way I know of to use raw OpenGL within a QtQuick scene.
I have been looking far and wide to find out how, if it's possible, you can fill a particular area of a QML screen with an OpenGL context and do custom OpenGL only in that context. I've seen plenty of demos where QML components like buttons lie on top of or below a screen-wide OpenGL context (as is typically required by games), but I'd like to be able to situate several distinct OpenGL contexts within QML and have the QML file define how large they are, where they are positioned, and so on.
Now, since Qt 5 is all OpenGL under the hood, it makes me wonder: could using a Canvas element with custom drawing via JavaScript give rendering performance similar to custom OpenGL? This would be a meaningful alternative, but it's not clear to me how the JavaScript drawing is handled at runtime compared to custom OpenGL drawing.
What is it that you want to draw? QQuickPaintedItem may be the simplest way to go about it. When you use QOpenGLFramebufferObject as the render target, the painter will use OpenGL to paint into a texture. It might be easier than writing your own OpenGL code if all you're doing is 2D.
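A minimal sketch of that route (the class name is illustrative): subclass QQuickPaintedItem, select the FramebufferObject render target, and implement paint().

```cpp
#include <QColor>
#include <QPainter>
#include <QQuickPaintedItem>

class PaintedOverlay : public QQuickPaintedItem
{
    Q_OBJECT
public:
    PaintedOverlay()
    {
        // Paint through OpenGL into an FBO texture instead of a
        // software QImage.
        setRenderTarget(QQuickPaintedItem::FramebufferObject);
    }

    void paint(QPainter *painter) override
    {
        painter->setRenderHint(QPainter::Antialiasing);
        painter->setBrush(QColor(70, 130, 180));
        painter->drawRect(QRectF(10, 10, width() - 20, height() - 20));
    }
};
```

Register it with qmlRegisterType<PaintedOverlay>("MyModule", 1, 0, "PaintedOverlay") (module name is illustrative) and you can size and position it from QML like any other item.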
I'm trying to create four QGLWidgets with the same GL3 context so I can share a VBO between them. I've been doing this for a while with just one widget, but it wasn't shared with the others. QGLWidget has a shareWidget parameter, which, from what I understand, automatically shares the contexts between them, but I'm not sure how compatible that is with JOGL.
I'm also confused about when the context is actually created. Some examples say to create the context in initializeGL. I'm not sure if that means I have to update the first widget before I can create the secondary widgets (passing the first created widget, with a current context, as the shareWidget parameter).
Can anyone provide me with a simple example to get this functioning? I just need to create four context-sharing GLWidgets that all run off a GL3 profile.
Although I'm not using JOGL, I am doing a similar thing here and here. The basic idea is that you create a hidden QGLWidget, make it current, and compile all your shaders, then pass it as the shareWidget to your child viewports. Whenever you want to upload geometry, make the hidden QGLWidget current and do your glBufferData calls; the data becomes available to the other viewport contexts.
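A minimal sketch of that setup (variable names are illustrative; this assumes the Qt 4/early-Qt 5 QGLWidget API the question uses):

```cpp
#include <QApplication>
#include <QGLWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QGLFormat format;
    format.setVersion(3, 3);                   // request a GL3 context
    format.setProfile(QGLFormat::CoreProfile);

    // Hidden widget that owns the shared context; it is never shown.
    QGLWidget shareWidget(format);
    shareWidget.makeCurrent();
    // ... compile shaders and upload VBOs with glBufferData() here ...

    // Four visible viewports created with the hidden widget as the
    // shareWidget, so they all see the same GL objects.
    QGLWidget *viewports[4];
    for (int i = 0; i < 4; ++i) {
        viewports[i] = new QGLWidget(format, nullptr, &shareWidget);
        viewports[i]->show();
    }

    return app.exec();
}
```

You can verify at runtime that sharing actually worked with QGLContext::areSharing(viewports[0]->context(), shareWidget.context()).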