Qt & double buffering - are there any neat tricks to capture pixels or manipulate the back buffer?

I'm migrating an application to Qt from MFC.
The MFC app would use GDI calls to construct the window (a graph plot, basically). It would draw to a memory bitmap back buffer, and then BitBlt that to the screen. Qt, however, already does double buffering.
When the user clicks and drags in the graph, I'd like that section of the window to be inverted.
I'd like to find the best way to do this. Is there a way to do something like grabWindow() that will grab from the widget's back buffer rather than the screen? ... maybe a BitBlt(..., DSTINVERT) equivalent?
I saw setCompositionMode() in QPainter, but the docs say that only works on painters operating on QImage. (Otherwise I could composite a solid rectangle image onto my widget with a fancy composition mode to get something like the invert effect)
I could do the same thing as in MFC, painting to a QImage back buffer... but I read that hardware acceleration may not work this way. It seems like it'd be a waste to reimplement the double buffering Qt already provides. I'm also not sure what the side effects of turning off the widget's double buffering might be (to avoid triple buffering).
At one point, I had a convoluted QPixmap::grabWidget() call with recursion-preventing flags protecting it, but that rendered everything twice and is obviously worse than just drawing to a QImage. (and it's specifically warned against in the docs)
Should I give up and draw everything to a QImage doing it basically like I did in MFC?
EDIT:
Okay, a QPixmap painter now runs at approximately the same speed as painting directly to the widget. So, using a QPixmap back buffer seems to be the best way to do this.
The solution was not obvious to me, but if I had looked at more examples (like Ariya's Monster demo), I possibly would have just coded it the way it was expected to be done, and it would have worked just fine.
Here's the difference. I saw help system demos using this:
QPainter painter(this);
at the start of paintEvent(). So it seemed to naturally follow that to double buffer to a QPixmap and then paint to the screen, you needed to do this:
QPainter painter(&pixmap);
QPainter painterWidget(this);
... draw using 'painter' ...
painterWidget.drawPixmap(QPoint(0,0), pixmap);
when in fact you are apparently supposed to do this:
QPainter painter;
painter.begin(&pixmap);
... draw using 'painter' ...
painter.end();
painter.begin(this);
painter.drawPixmap(QPoint(0,0), pixmap);
painter.end();
I can see that my way had two active painters at the same time. I'm not entirely sure why the second approach is faster, but intuitively I like it better: it's a single QPainter object, and it's only doing one thing at a time. Maybe someone can explain why the first method is bad? (In terms of broken assumptions in the Qt rendering engine.)

Assuming you don't really want the pixel values from your offscreen buffer (but rather just want to draw something on top of it and blit it to the screen again), you should use QPixmap as the buffer, not QImage. Using the latter disables all painting acceleration, as Qt falls back to its software raster engine; hence the use of QPixmap. If you use the OpenGL graphics system, you can still benefit from it.
For an example of how to do this, check my last code on running the Monster demo; it needs an offscreen pixmap to do the motion-blur effect via repeated painting with the source-over composition mode.
To disable Qt's backing store (which is generally not a good idea), set the Qt::WA_PaintOnScreen attribute on your top-level widget.
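In code, that's a one-liner, typically in the widget's constructor:

setAttribute(Qt::WA_PaintOnScreen);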
A bit unrelated, but you might want to have a look at the QRubberBand widget.
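For reference, a minimal QRubberBand sketch for click-drag selection, closely following the pattern in the Qt documentation (GraphWidget, origin, and rubberBand are assumed names: origin is a QPoint member, rubberBand a QRubberBand* member initialized to 0):

void GraphWidget::mousePressEvent(QMouseEvent *event)
{
    origin = event->pos();
    if (!rubberBand)
        rubberBand = new QRubberBand(QRubberBand::Rectangle, this);
    rubberBand->setGeometry(QRect(origin, QSize()));
    rubberBand->show();
}

void GraphWidget::mouseMoveEvent(QMouseEvent *event)
{
    rubberBand->setGeometry(QRect(origin, event->pos()).normalized());
}

void GraphWidget::mouseReleaseEvent(QMouseEvent *event)
{
    rubberBand->hide();
    // the selected area is QRect(origin, event->pos()).normalized()
}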

When drawing on top of the graph area, you should be able to use composition modes to invert it: draw white using the Difference composition mode. The following example is a subclass of QLabel showing a pixmap:
void Widget::paintEvent(QPaintEvent *pe)
{
    // make sure we paint the background
    QLabel::paintEvent(pe);

    // paint the overlay
    if (!selectionRect.isNull()) {
        QPainter p(this);
        p.setCompositionMode(QPainter::CompositionMode_Difference);
        p.fillRect(selectionRect, QColor("#FFFFFF"));
    }
}
(Screenshot: http://chaos.troll.no/~hhartz/yesManInverted.png)

The simplest, most straightforward answer I know of is to do it like you were doing before: draw to a QImage, and use the QImage as the source for your widget on the screen.
Another option might be to add a transparent widget over your graph, which only draws the inverted part of the graph. I don't think this will optimize the drawing at all, however: it will likely cause the underlying graph to be drawn, and then the inverted part drawn as an overlay.
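A rough sketch of that overlay idea (my own naming; it relies on the raster paint engine honoring composition modes when painting on widgets, as the QLabel example above demonstrates):

class InvertOverlay : public QWidget
{
public:
    explicit InvertOverlay(QWidget *graph) : QWidget(graph)
    {
        // let mouse events fall through to the graph underneath
        setAttribute(Qt::WA_TransparentForMouseEvents);
        resize(graph->size());
    }

    QRect selectionRect; // the area to invert, set by the selection logic

protected:
    void paintEvent(QPaintEvent *)
    {
        if (selectionRect.isNull())
            return;
        QPainter p(this);
        p.setCompositionMode(QPainter::CompositionMode_Difference);
        p.fillRect(selectionRect, QColor(Qt::white));
    }
};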

Related

Limitations of using GLFW and OpenGL for GUIs

I would like to know what kind of limitations can result from using GLFW and OpenGL instead of using a traditional GUI toolkit like Qt or GTK.
Of course, I know that GLFW with OpenGL doesn't expose the same level of functionality, but if only a few kinds of widgets are needed, I think those could be easily implemented.
The question is, is there some feature that couldn't be implemented on top of GLFW/OpenGL in contrast to Qt or GTK?
For example, I'm worried about drawing menus outside the window region (I guess that an auxiliary non-decorated window could be used in this case).
I know that GLFW with OpenGL doesn't expose the same level of functionality, but if only a few kinds of widgets are needed
When it comes to OpenGL, there isn't any limit per se. You can draw wherever and whatever you want. The area where you can draw is a limiting factor from the operating system's side of things.
Remember that even some "simple" functionality, like a textbox, is already complicated: not only do you have to handle rendering (and scalable text isn't always fun), but you also have to handle keyboard events, draw the cursor, handle text selection, and so on.
For example, I'm worried about drawing menus outside the window region (I guess that an auxiliary non-decorated window could be used in this case).
When it comes to drawing outside the window region, this isn't directly OpenGL related. It's more a question depending on the OS.
For instance, using the WinAPI, you can draw anywhere on the screen simply by doing:
#include <Windows.h>

int main(int argc, char **argv)
{
    // get a device context for the whole desktop
    HWND desktop = GetDesktopWindow();
    HDC dc = GetDC(desktop);

    RECT rect = { 20, 20, 200, 200 };
    HBRUSH brush = CreateSolidBrush(RGB(0, 0, 255));
    FillRect(dc, &rect, brush);

    // release the GDI resources we acquired
    DeleteObject(brush);
    ReleaseDC(desktop, dc);
    return 0;
}
Note that the rectangle will disappear immediately when the screen redraws that area.
When you already have a window, you can use SetWindowRgn() to change the region your application is allowed to draw within. Note that you can't just change this region and expect everything to be fine and dandy.
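For example, something like this (a sketch; hwnd is assumed to be an existing top-level window):

// clip the window to a 300x200 rounded rectangle; once SetWindowRgn
// succeeds, the system owns the region, so we must not delete it
HRGN rgn = CreateRoundRectRgn(0, 0, 300, 200, 20, 20);
SetWindowRgn(hwnd, rgn, TRUE);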
The question is, is there some feature that couldn't be implemented on top of GLFW/OpenGL in contrast to Qt or GTK?
Bottom line: no. There isn't any feature in Qt or GTK that you can't implement with OpenGL. The point is that it isn't just OpenGL; a lot of it depends on the operating system, thus needing OS-specific code.

Qt: The best procedure to create own QGraphicsScene/s

I'm making fractal creator software, so I need one scene per fractal, and these scenes need to be "layered" because of the zooming rubber band.
I've already tried to write it the "widget" way: I had one custom widget called "canvas". In my canvas class I overrode paintEvent, and inside this event I rendered the current fractal. Every time somebody chose another fractal from the menu, I called update() and the new fractal was rendered. For zooming I overrode the mouse events and called update() on the canvas. At first I repainted the whole canvas, but it was very, very slow. After that I repainted only the part under the rubber band, but it was still slow when selecting a bigger area, and there were other repainting problems.
So I've looked for another way to do it: layers. I found QStackedWidget, but I didn't find a way to make both of my layers visible with the top one transparent. After that I found QGraphicsScene, and this seems to be the best way to do it. But I don't know the correct procedure. Below are the two procedures I'm thinking about:
Create QGraphicsView
Instead of the widget, the canvas will be QGraphicsScene
I'll override some QGraphicsScene event (but I don't know which one; drawItems() is obsolete, and overriding update() seems wrong to me, but maybe...)
When another fractal is chosen, I'll repaint the canvas by calling update(), the same way as in my "widget" solution
The zooming rubber band will be in the foreground layer
or:
Create QGraphicsView
Instead of the widget, the canvas will be QGraphicsScene
Every fractal will be a child of QGraphicsItem
When another fractal is chosen, I'll remove the old fractal item, replace it with the new one, and probably call invalidate()
The zooming rubber band will be in the foreground layer - I think that's common behaviour of QGraphicsScene, isn't it?
Is one of my approaches correct? Do you suggest anything else? Fractal calculations are expensive, and it's very important to repaint only when necessary. Could you help me, please?
Thank you :-)
Edit: "zooming rubber band" explanation:
I'm sorry for my expression "zooming rubber band". It means scaling (zooming) the area under the selection made by the rubber band - zooming the same way as in Photoshop CS5, for example. And I'd like to know what part of the scene is repainted while selecting this way: the whole scene, just the part of the scene under the selected area, or nothing at all because the rubber-band selection is drawn in a separate layer.
I hope my explanation helped :-).
In Qt, a QGraphicsScene can be thought of as a world of items, with a QGraphicsView as a window into that world. Therefore, you should be adding items to the QGraphicsScene, based on QGraphicsItem (or QGraphicsObject if you want signals and slots).
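In code, that structure is just a few lines (a sketch; someItem stands for any QGraphicsItem subclass you create):

QGraphicsScene scene;
QGraphicsView view(&scene); // the view is a window into the scene
scene.addItem(someItem);
view.show();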
In your situation, I'd create a Fractal class that inherits from QGraphicsItem and add that to the scene. Be sure to override the necessary pure virtual functions such as boundingRect and paint.
Do not run the fractal calculations in the paint function. I suggest the Fractal class store a QPixmap (or a QImage if you're drawing at the pixel level) and render the fractal to that. Then, in the paint function, the Fractal class would just render the contents of the QPixmap with a call to painter->drawPixmap (or painter->drawImage, whichever is relevant in this case).
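A minimal sketch of that idea (the class layout and the recompute() step are my own naming, not prescribed by anything above):

class Fractal : public QGraphicsItem
{
public:
    explicit Fractal(const QSize &size) : m_cache(size)
    {
        m_cache.fill(Qt::black);
    }

    QRectF boundingRect() const
    {
        return QRectF(QPointF(0, 0), m_cache.size());
    }

    void paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *)
    {
        // cheap: just blit the cached rendering
        painter->drawPixmap(0, 0, m_cache);
    }

    void recompute()
    {
        QPainter p(&m_cache);
        // ... expensive fractal rendering into the pixmap goes here ...
        p.end();
        update(); // schedules a repaint that only blits the cache
    }

private:
    QPixmap m_cache;
};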
As for zooming, your Fractal class can then respond to being scaled, appropriately re-rendering its internal representation.

QtQuick 2.0 scene on top of Direct3D scene

I have been trying to come up with a solution for having a QtQuick 2.0 scene together with a Direct3D scene for quite a while, but wasn’t very successful. My goal is to have a Direct3D engine running at reasonable speed (60 FPS?) together with QML UI on top. Both things run just fine at 150-200 FPS on their own. But when forced to cooperate together within one window, everything just goes bananas. I have investigated several approaches, but none of them seems to be sufficient enough:
Solution A: Rendering Direct3D scene into a texture, visualizing with QImage & QQuickPaintedItem
this solution works quite well and seems to be the preferred one according to other people on the web. However, it is TERRIBLY slow. I wasn't able to get more than 18-20 FPS at full HD. The bottleneck is clearly the per-frame texture transfer chain from GPU (D3D) to CPU (QImage) and back to GPU (QML renderer). The CPU->GPU upload on the QML side especially was way too slow!
Solution B: Rendering QtQuick scene into a FBO, then using Direct3D texture
this is basically the previous solution the other way around. The speed is a little better when the UI doesn't require an update; once it starts animating, everything drops to 18-20 FPS again. QOpenGLFramebufferObject::toImage() obviously takes its time. Implementing texture/FBO double buffering on both sides to reduce stalls doesn't really help.
Solution C: QQuickView with enabled transparency on top of QWidget with Direct3D scene
I was not lucky with this approach either. It seems the transparency only works when the QQuickView is in its own window. Once I put it on top of my D3D QWidget within the same window, it immediately stopped working and became fully opaque. Someone was trying to do something similar here: http://qt-project.org/forums/viewthread/5484, but I had no luck with that solution at all. Maybe keeping two completely separate windows (the main D3D window plus a frameless transparent QML window) on top of each other at all times would do the trick, but that just sounds silly.
Solution X: Modify ANGLE library and try to extract & share D3D device context with my Direct3D renderer
I haven't tried this yet, as I'm avoiding library modifications as long as possible. Would that even be a sensible option?
My obvious questions here are: Am I doing something wrong? What is the preferred solution? A, B, C, X or maybe something totally different? Can someone point me to the right direction?
TL;DR: What is the fastest way to render QML scene on top of Direct3D scene?
Sounds like you ideally want a bastard mix of Solution X and writing yourself a DirectX QPA plugin.
http://qt-project.org/wiki/Qt-Platform-Abstraction
I'd wager you'd make a lot of friends if you open sourced such an effort!!

How to obtain the frame buffer from within QWidget's paintEvent()

Is there any way to get the pixels that will be displayed on a QWidget, do some processing, and then display the processed pixels?
I can't seem to overcome the limitations of paintEvent(), hopefully someone can help.
QPixmap::grabWidget and QWidget::render will get me the pixels I need, but they cannot be called from within paintEvent(), since doing so will trigger an infinite loop.
I have tried running a timer, taking a snapshot, doing my processing, forcing a repaint, and displaying the saved image. This works to some extent, but on dynamic content (i.e. moving) it fails miserably.
I need to be able to do this from within paintEvent().
Is there any way to do this?
It sounds like your problem would be best solved by rendering the widget to a pixmap (within the paint event), doing your processing on the pixmap, then rendering the result to the widget afterwards:
void MySuperAwesomeWidget::paintEvent(QPaintEvent *event)
{
    QPixmap pixmap(size());
    QPainter painter;

    painter.begin(&pixmap);
    // drawing code goes here
    painter.end();

    // do processing on pixmap here

    painter.begin(this);
    painter.drawPixmap(0, 0, pixmap);
    painter.end();
}
Normally, the technique I've described would be considered unnecessary (or even undesirable) because it is essentially a form of double buffering, and QWidget already provides double buffering behind the scenes. However, in your case you are processing the drawing before performing a final rendering, so this is probably the best approach.
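For instance, if the processing step were the inversion discussed in the first question above, the middle step might look something like this (a sketch; selectionRect is a hypothetical member holding the dragged-out area):

// invert only the selected part of the back buffer
QImage image = pixmap.toImage();
QImage selection = image.copy(selectionRect);
selection.invertPixels();
QPainter imagePainter(&image);
imagePainter.drawImage(selectionRect.topLeft(), selection);
imagePainter.end();
pixmap = QPixmap::fromImage(image);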

How can I hunt down these OpenGL calls that are distorting objects in my scene?

I'm mixing two libraries that use OpenGL: Qt and OpenSceneGraph. I'm targeting OpenGL ES 2, so everything is done with shaders and ES 2 compatible calls.
I'm specifically using OSG with QtDeclarative by trying to paint OSG onto a QDeclarativeItem. I do this the way suggested in the Qt documentation: wrap all OpenGL calls between beginNativePainting()/endNativePainting().
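In outline, that pattern looks like this (a sketch; OsgItem and the comment stand in for my actual QDeclarativeItem subclass and its rendering code):

void OsgItem::paint(QPainter *painter, const QStyleOptionGraphicsItem *, QWidget *)
{
    painter->beginNativePainting();
    // ... raw OpenGL / OpenSceneGraph rendering happens here ...
    painter->endNativePainting();
}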
This works fine until I use textures in my OpenSceneGraph scene. When I do this, my QML window gets "messed up" for lack of a better word. To keep it as simple as possible, my OSG scene consists of a plane with a texture applied to it. I recreated the scene using basic OpenGL calls and the problem no longer occurs. Here's the problem summarized as a bunch of pictures:
The QtDeclarative engine uses OpenGL to paint stuff. I set up a simple QML page:
I create a simple scene using OpenGL directly. It's a plane with a texture painted onto it.
Now I try to set up the same scene in OSG... identical shaders, etc.
You can see something odd is going on with the last screenshot. Don't worry about the black background where the original OpenGL scene was transparent, that's just OSG using a black clear color. The problem is that the other items set up with QML (the rectangles) get messed up.
Edit: To clarify what happens: The rectangles I draw with QML are all stretched out to the right edge of the screen. I also noticed if I draw rectangles after the OpenSceneGraph item in QML, they don't show up (I didn't notice this before). I draw the purpley black rectangle after the OSG item in the following screenshots... note that it disappears. There might be more weird stuff happening, but this is all I've observed playing with rectangles.
Before
After
I'm fairly new to OpenGL so I don't know what kind of call/state setting would cause something like this to happen. I think that OpenSceneGraph makes some OpenGL state change that's messing up Qt's paint engine. I also know that this only occurs when OSG uses textures... if I don't apply textures in my OSG scene, this doesn't happen. This is where I'm stuck.
Also, I tried to use BuGLe to get an OpenGL call trace with and without textures enabled in OSG, to see if I could figure out the problematic state change(s). I found a few differences, and even some global state that OSG changed between the two (such as glPixelStorei()), but resetting the changes I found made no difference. It would help a lot if I knew what to look for. If anyone's feeling insane, I also have the full call traces:
OSG with texturing: http://pastie.org/4223182 (osg texture stuff is lines 637~650)
OSG without texturing: http://pastie.org/4223197
Edit 2:
Here's a diff that might be helpful. You'll need to scroll way down before the relevant lines are apparent.
http://www.mergely.com/nUEePufa/
Edit 3:
Woah! Okay, that diff helped me out quite a bit. OSG enables VertexAttribArray 3 but doesn't disable it. Calling glDisableVertexAttribArray(3) after OSG renders its frame seems to partially solve the problem; there's no more stretching of the QML rectangles. However, rectangles drawn after the OSG item still don't show up.
After obsessing over the trace logs, I think I've found two OpenGL things that need to be reset before passing control back to Qt in order to make the issues above go away. I mentioned one in an edit... I'll summarize both in this answer.
Rectangle/QML Item distortion
QPainter uses vertex attributes 3, 4, and 5 directly for something that looks like it's related to the geometry of those rectangles. This can be seen in the trace:
[INFO] trace.call: glVertexAttrib3fv(3, 0x2d94a14 -> { 0.00195312, 0, 0 })
[INFO] trace.call: glVertexAttrib3fv(4, 0x2d94a20 -> { 0, -0.00333333, 0 })
[INFO] trace.call: glVertexAttrib3fv(5, 0x2d94a2c -> { 0.2, 0.4, 1 })
Disabling the corresponding vertex attribute arrays fixes the stretchy rectangles issue:
glDisableVertexAttribArray(3);
glDisableVertexAttribArray(4);
glDisableVertexAttribArray(5);
Items drawn after the OSG Item don't render
In retrospect, this one was easy and didn't have anything to do with texturing. I hadn't noticed it before trying to add textures to my scene, though, so mixing the two issues was my fault. I also screwed up the traces and diff I posted; I never updated them to account for the ordering problem after I discovered it (sorry!).
Anyways, QPainter expects depth testing to be turned off. Qt turns depth testing off when you call beginNativePainting(), and also when it starts to paint its items... but you're expected to turn it back off whenever you hand control back:
QPainter paints stuff (DEPTH_TEST = off)
OSG draws stuff (DEPTH_TEST = on)
QPainter paints more stuff [expects DEPTH_TEST = off]
The right trace logs showed that I wasn't doing this... so the fix is:
glDisable(GL_DEPTH_TEST);
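Putting both fixes together, the cleanup after each OSG frame looks roughly like this (viewer.frame() is a stand-in for however the OSG frame actually gets rendered):

viewer.frame(); // OSG renders, leaving GL state dirty

// restore the state QPainter depends on
glDisableVertexAttribArray(3);
glDisableVertexAttribArray(4);
glDisableVertexAttribArray(5);
glDisable(GL_DEPTH_TEST);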
Maybe you just need to re-enable GL_TEXTURE_2D? I notice in your example with textures that OSG enables, and subsequently disables, GL_TEXTURE_2D. Thus the difference between your two cases (with texture vs. without) is that the one that uses textures finishes with texturing disabled, while the one without texturing leaves GL_TEXTURE_2D in its initial state.
If Qt needs/expects texturing to be enabled to draw quads, that could cause nothing to show up.
