Greetings all,
Does simply subclassing QGLWidget and reimplementing paintEvent() make use of OpenGL and hardware acceleration?
I create a QPainter and draw QImages in this paintEvent().
What happens inside the paintEvent() method of QGLWidget? Does it convert the images (QImage, QPixmap) into OpenGL textures?
Does it use hardware acceleration for image scaling?
Thanks in advance,
umanga
Take a look at http://doc.qt.io/archives/4.6/opengl-2dpainting.html for an instructive example, where you can also find the following quote: "it is possible to re-implement its [QGLWidget] paintEvent() and use QPainter to draw on the device, just as you would with a QWidget. The only difference is that the painting operations will be accelerated in hardware if it is supported by your system's OpenGL drivers."
So, the answer to your first question is yes.
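To make that concrete, a minimal sketch of such a subclass might look like this (the class name and the image path are purely illustrative):

#include <QGLWidget>
#include <QImage>
#include <QPainter>

class ImageWidget : public QGLWidget
{
public:
    explicit ImageWidget(QWidget *parent = 0)
        : QGLWidget(parent), m_image(":/some_image.png") {}  // hypothetical resource path

protected:
    // QPainter calls issued here are routed through the OpenGL paint engine.
    void paintEvent(QPaintEvent *)
    {
        QPainter painter(this);
        painter.setRenderHint(QPainter::SmoothPixmapTransform);
        // the image is uploaded as a texture and stretched to rect() by OpenGL
        painter.drawImage(rect(), m_image);
    }

private:
    QImage m_image;
};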
For figuring out the exact details of the implementation, let's take a quick peek at a piece of source code from QOpenGLPaintEngine (which can be found by searching the internet):
void QOpenGLPaintEngine::drawImage(const QRectF &r, const QImage &image,
                                   const QRectF &sr, Qt::ImageConversionFlags)
{
    Q_D(QOpenGLPaintEngine);
    if (d->composition_mode > QPainter::CompositionMode_Plus
        || d->high_quality_antialiasing && !d->isFastRect(r))
        d->drawImageAsPath(r, image, sr);
    else {
        GLenum target = (QGLExtensions::glExtensions & QGLExtensions::TextureRectangle)
                        ? GL_TEXTURE_RECTANGLE_NV
                        : GL_TEXTURE_2D;
        if (r.size() != image.size())
            target = GL_TEXTURE_2D;
        d->flushDrawQueue();
        d->drawable.bindTexture(image, target);
        drawTextureRect(image.width(), image.height(), r, sr, target);
    }
}
This answers your question regarding QImages: they are indeed uploaded and drawn as textures. And since the texture is mapped onto the target rectangle by OpenGL, scaling the image is hardware accelerated as well.
Yes, if you use GL commands inside a QGLWidget, in the paintGL(), resizeGL() and initializeGL() methods, you will get full hardware acceleration (if available).
Using QPainter on a QGLWidget also gets hardware acceleration, since there is an OpenGL QPaintEngine implementation; see the 2D Painting example linked above.
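For the purely native route, the usual skeleton looks roughly like this (a bare-bones sketch, not tied to any particular scene):

#include <QGLWidget>

class GLScene : public QGLWidget
{
protected:
    void initializeGL()
    {
        // one-time GL state setup
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glEnable(GL_DEPTH_TEST);
    }

    void resizeGL(int w, int h)
    {
        // keep the viewport in sync with the widget size
        glViewport(0, 0, w, h);
    }

    void paintGL()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // issue native GL draw calls here...
    }
};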
Related
We are migrating from old OpenGL to modern OpenGL. I am trying to port two functions which use Qt/OpenGL and want to convert them to modern OpenGL. The QImage content should be converted to an OpenGL-compatible format; then I want to read the pixels of the QImage and render them in OpenGL. How can this be done in modern OpenGL? I know glCopyPixels()/glDrawPixels() are deprecated. Any pointers? I have the following code, but it is in old OpenGL. Basically, the whole idea is writing to the back buffer and later restoring the back buffer and rendering its pixels, to avoid a redraw. I am using the QOpenGLWidget class provided by the Qt framework (Qt 5.1). I have tried many things to convert the QImage to an OpenGL format, but it did not work. I need your help. Thanks in advance.
QImage _savedBackBuffer;

void SaveBackBuffer()
{
    glReadBuffer(GL_BACK);
    QImage buf = this->grabFramebuffer();
    // convertToGLFormat() is not available in the QOpenGLWidget class
    _savedBackBuffer = convertToGLFormat(buf);
}

void restoreBackBuffer()
{
    glDrawBuffer(GL_BACK);
    // glDrawPixels() is deprecated; how can this call be replaced?
    glDrawPixels(_savedBackBuffer.width(), _savedBackBuffer.height(),
                 GL_RGBA, GL_UNSIGNED_BYTE, _savedBackBuffer.bits());
}
void flush()
{
    glReadBuffer(GL_BACK);
    glDrawBuffer(GL_FRONT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // glCopyPixels() is deprecated as well
    glCopyPixels(0, 0, _scrWidth, _scrHeight, GL_COLOR);
    ...
    glFlush();
}
I have added the code below to grab the framebuffer, but I am still getting an empty QImage. Is anything wrong with my code?
void saveBackBuffer()
{
    _bSavingBackBuffer = true;

    QString fileName("C:\\Users\\ey617e\\Desktop\\yourFile.png");
    QFile file(fileName);
    file.open(QIODevice::WriteOnly);

    glReadBuffer(GL_BACK);
    makeCurrent();

    QOpenGLFramebufferObjectFormat format;
    format.setAttachment(QOpenGLFramebufferObject::CombinedDepthStencil);
    QOpenGLFramebufferObject *fbo =
        new QOpenGLFramebufferObject(_scrWidth, _scrHeight, format);

    fbo->bind();
    paintGL();
    _savedBackBuffer = fbo->toImage();
    _savedBackBuffer.save(file.fileName(), "PNG");
    fbo->release();
}
void paintGL()
{
    QOpenGLPaintDevice fboPaintDev(_scrWidth, _scrHeight);
    QPainter painter(&fboPaintDev);
    painter.setRenderHints(QPainter::Antialiasing | QPainter::TextAntialiasing);

    painter.beginNativePainting();
    drawDisplayLists(_underIllusDisplayLists);
    drawDisplayLists(_illusDisplayLists);
    painter.endNativePainting();

    painter.drawText(20, 40, "Foo");
    painter.end();
}
You can create a QOpenGLTexture object directly from a QImage: https://doc.qt.io/qt-5/qopengltexture.html#QOpenGLTexture-1
You can then use that texture directly for any image-related OpenGL operations.
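A rough sketch of that approach, replacing the glDrawPixels()-based save/restore with a texture (this assumes Qt 5.8 or later for QOpenGLTextureBlitter, which is just one convenient way to draw the texture without writing your own shaders; the member names are illustrative):

#include <QOpenGLTexture>
#include <QOpenGLTextureBlitter>
#include <QMatrix4x4>

// members of the QOpenGLWidget subclass (illustrative names)
QOpenGLTexture *m_savedTexture = nullptr;
QOpenGLTextureBlitter m_blitter;

void saveBackBuffer()
{
    makeCurrent();
    delete m_savedTexture;
    // upload the grabbed frame as a texture instead of keeping raw pixels around
    m_savedTexture = new QOpenGLTexture(grabFramebuffer());
    if (!m_blitter.isCreated())
        m_blitter.create();
    doneCurrent();
}

void restoreBackBuffer()
{
    // draws the saved texture as a window-filling quad; replaces glDrawPixels()
    m_blitter.bind();
    const QMatrix4x4 target = QOpenGLTextureBlitter::targetTransform(
        QRectF(0, 0, width(), height()), QRect(0, 0, width(), height()));
    m_blitter.blit(m_savedTexture->textureId(), target,
                   QOpenGLTextureBlitter::OriginTopLeft);
    m_blitter.release();
}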
Basically the whole idea is writing to back buffer and restoring the back buffer and render pixels to avoid redraw.
Don't do that! It will actually impair performance, since drawing on top of previously rendered content introduces implicit synchronization points, thereby eliminating opportunities to render new content in parallel with advancing the presentation swap chain.
As "counterintuitive" as it may sound, just redraw the whole thing, every single frame. If your codebase is that old, the complexity of what you're drawing is very likely so low that you could easily render thousands of frames per second.
On the other hand, retaining the contents of the back buffer constitutes a cache, and thus introduces the complexity of deciding on cache invalidation.
I bet that just redrawing using modern methods (geometry in buffer objects, index buffers, untangling of sync points) and simplifying the rendering code path by simply eliminating the code responsible for determining when to redraw portions of the picture will vastly outperform anything you had before.
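To make the "just redraw, but with modern methods" suggestion a bit more tangible, here is a bare-bones sketch of a widget that keeps its geometry in a vertex/index buffer pair and simply redraws it every frame (names, geometry and shaders are purely illustrative, not the asker's scene):

#include <QOpenGLWidget>
#include <QOpenGLFunctions>
#include <QOpenGLBuffer>
#include <QOpenGLVertexArrayObject>
#include <QOpenGLShaderProgram>

// Geometry is uploaded into buffer objects once; paintGL() only re-issues the draw call.
class SceneWidget : public QOpenGLWidget, protected QOpenGLFunctions
{
protected:
    void initializeGL()
    {
        initializeOpenGLFunctions();

        m_program.addShaderFromSourceCode(QOpenGLShader::Vertex,
            "attribute vec2 pos;\n"
            "void main() { gl_Position = vec4(pos, 0.0, 1.0); }");
        m_program.addShaderFromSourceCode(QOpenGLShader::Fragment,
            "void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }");
        m_program.link();
        m_program.bind();

        static const GLfloat verts[]    = { -0.5f, -0.5f,  0.5f, -0.5f,
                                             0.5f,  0.5f, -0.5f,  0.5f };
        static const GLushort indices[] = { 0, 1, 2,  2, 3, 0 };

        m_vao.create();
        m_vao.bind();

        m_vbo.create();                      // vertex buffer object
        m_vbo.bind();
        m_vbo.allocate(verts, sizeof(verts));

        m_ibo.create();                      // index buffer object
        m_ibo.bind();
        m_ibo.allocate(indices, sizeof(indices));

        m_program.enableAttributeArray("pos");
        m_program.setAttributeBuffer("pos", GL_FLOAT, 0, 2);

        m_vao.release();
        m_program.release();
    }

    void paintGL()
    {
        // redraw the whole scene, every frame
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        m_program.bind();
        m_vao.bind();
        glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
        m_vao.release();
        m_program.release();
    }

private:
    QOpenGLShaderProgram m_program;
    QOpenGLBuffer m_vbo;                                // defaults to VertexBuffer
    QOpenGLBuffer m_ibo{QOpenGLBuffer::IndexBuffer};
    QOpenGLVertexArrayObject m_vao;
};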
I am currently building a Qt application that displays streaming data with native OpenGL in Qt windows.
I have two widgets inherited from QOpenGLWidget; one has a parent, the other doesn't. They both work well individually (when I just show() one widget at a time). However, when I try to render them simultaneously, one of the textures I bind via glBindTexture() appears in the wrong window. It's as if they are using the same context(), but since each inherits from QOpenGLWidget, they should have two different contexts.
In my code, I just override initializeGL, paintGL and resizeGL as usual:
void initializeGL() {
    initializeOpenGLFunctions();
    // generate buffer, allocation, shaders...
}

void paintGL() {
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // bind vao, bind texture, drawElements...
}
Basically, my second window (window2) is a "video player" that plays an image sequence from memory, but it appears in window1. I have also set a QSurfaceFormat via:
QSurfaceFormat format;
format.setRenderableType(QSurfaceFormat::OpenGL);
format.setVersion(3,3);
format.setProfile(QSurfaceFormat::CoreProfile);
setFormat(format);
in the constructor.
Could someone tell me what might be wrong here? I think the context() objects the two windows use are different, so how can a glBindTexture() call in window2 apply to window1? If this information is not enough, please tell me. Thanks.
Platform: Ubuntu 16.04, Qt 5.6.2, OpenGL 3.3
Update:
I have the same issue as this post: OpenGL multiple window rendering. However, mine is inside a Qt 5 environment, so theoretically it should work.
I have a minimal application which uses QOpenGLWidget to integrate an OpenGL wrapper library (OpenSceneGraph). I am trying to figure out how to correctly use the Qt 5.6 support for high DPI screens when dealing with OpenGL content like mine.
My main() function has the following code:
int main(int argc, char** argv)
{
    // DPI support is on
    QApplication::setAttribute(Qt::AA_EnableHighDpiScaling);

    QApplication app(argc, argv);
    QMainWindow window;

    // QOpenGLWidget with OpenSceneGraph content
    QtOSGWidget* widget = new QtOSGWidget();
    window.setCentralWidget(widget);

    window.show();
    return app.exec();
}
The QtOSGWidget is derived from QOpenGLWidget with OpenSceneGraph content: I use osgViewer::GraphicsWindowEmbedded to render my simple scene.
To merge OSG with Qt, I re-define the *GL() methods: paintGL(), resizeGL() and initializeGL(). I follow the Qt docs on what each of the *GL() methods should contain, i.e.:
paintGL() makes sure the viewer is updated
resizeGL() makes sure the graphics window is resized properly (together with camera and viewport);
initializeGL() makes sure OpenGL state is initialized.
I also re-defined the Qt mouse events so that the events are passed on to OSG.
When I run my example on a normal-resolution screen, or with QApplication::setAttribute(Qt::AA_DisableHighDpiScaling);, the scene looks as it should, and when I manipulate the camera view the mouse coordinates are captured correctly.
However, when I turn the high DPI option on, the graphics window size is not scaled by Qt, and the mouse event coordinates are scaled as well, so they are not passed to OpenSceneGraph's event handler correctly. The sizing issue is probably caused by the way I set up the resizing:
virtual void resizeGL(int width, int height)
{
    // resize event is passed to OSG
    this->getEventQueue()->windowResize(this->x(), this->y(), width, height);
    // graphics window resize
    m_graphicsWindow->resized(this->x(), this->y(), width, height);
    // camera viewport
    osg::Camera* camera = m_viewer->getCamera();
    camera->setViewport(0, 0, this->width(), this->height());
}
That sizing is not scaled by Qt, and the same thing happens to the mouse event coordinates.
My question: is there a way to know what size the scaling will be performed to, so that I can implement resizeGL() correctly? Or what is the correct way to deal with the problem?
Update/solution using scaling by hand: thanks to the answer of #AlexanderVX, I figured out the scaling solution. First, I need to know some reference DPI values for the X and Y dimensions. Then I calculate the scaling coefficients based on them and pass them to my widget QtOSGWidget. So, the code of main() has to contain:
QApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
QApplication app(argc, argv);
int x = QApplication::desktop()->physicalDpiX();
int y = QApplication::desktop()->physicalDpiY();
// values 284 and 285 are the reference values
double scaleX = 284.0/double(x);
double scaleY = 285.0/double(y);
QMainWindow window;
QtOSGWidget* widget = new QtOSGWidget(scaleX, scaleY, &window);
// etc.
Then, whenever I refer to the sizing functions that need to be passed to the OpenSceneGraph (OpenGL) content, I have to apply the scaling, e.g.:
// resizeGL example
this->getEventQueue()->windowResize(this->x()*m_scaleX, this->y() * m_scaleY, width*m_scaleX, height*m_scaleY);
// mouse event example
this->getEventQueue()->mouseButtonPress(event->x()*m_scaleX, event->y()*m_scaleY, button);
Final update: since the target platform of my application is Windows 7-10, it makes much more sense to stick with the second part of #AlexanderVX's proposed answer, i.e., to use the SetProcessDPIAware() function.
Is there a way to know what size the scaling will be performed to, so that I can implement resizeGL() correctly?
First, detect the monitor:
// relative to widget
int screenNum = QApplication::desktop()->screenNumber(pWidget);
or maybe
// relative to global screen position
int screenNum = QApplication::desktop()->screenNumber(pWidget->mapToGlobal(QPoint(0, 0)));
and that gives us a pointer to the QScreen:
QScreen* pScreen = QGuiApplication::screens().at(screenNum);
from which you can read many screen characteristics, including the "physical dots per inch", which tells us how many pixels there are per inch:
qreal pxPerInch = pScreen->physicalDotsPerInch();
Having the pixels per inch, you will be able to scale your drawing code programmatically. Decide what the 'normal' density is and then scale proportionally against the density detected on the physical device. Of course, that approach is more suitable for accurate graphics. Be aware of both physicalDotsPerInch() and devicePixelRatio(), though.
qreal scaleFactor = pScreen->physicalDotsPerInch() / normalPxPerInch;
Or what is the correct way to deal with the problem?
However, with widgets and normal GUI drawing it is often easier to let Qt / the system scale the entire UI. See the Qt documentation: High DPI Displays.
If the OS is Windows Vista or higher and tuning Qt for high DPI sounds complicated, then there is a shortcut that I take and that helps me, though Qt complains in the log: "SetProcessDpiAwareness failed: COM error 0xffffffff80070005 (Unknown error 0x0ffffffff80070005)". I call SetProcessDPIAware() from main() before the event loop, and then the whole UI looks alike no matter what the monitor density is. I use it with Qt 5.5, though. There is also a SetProcessDpiAwareness() function worth exploring. I use SetProcessDPIAware() because it has been available since Windows Vista, whereas SetProcessDpiAwareness() is only available since Windows 8.1, so the decision may depend on your potential clients' systems.
A 'shortcut' approach:
int main(int argc, char** argv)
{
    // DPI support is on
    // QApplication::setAttribute(Qt::AA_EnableHighDpiScaling);

    // on Windows:
    // MSDN suggests not to use SetProcessDPIAware() as it is obsolete
    // and may not be available, but it works with widgets.
    ::SetProcessDPIAware();

    QApplication app(argc, argv);
    QMainWindow window;

    // QOpenGLWidget with OpenSceneGraph content
    QtOSGWidget* widget = new QtOSGWidget();
    window.setCentralWidget(widget);

    window.show();
    return app.exec();
}
I'd like to display a QImage through QGraphicsScene; my code is very straightforward:
mainwindow.h
QImage *sourceImage;
QGraphicsView *imageView;
QGraphicsScene *imageScene;
mainwindow.cpp
imageScene = new QGraphicsScene;
imageView = new QGraphicsView;
imageView->setScene(imageScene);
sourceImage = new QImage;
sourceImage.load(":/targetimage.png");
imageScene.addPixmap(QPixmap::fromImage(sourceImage));
And then the compiler points out exactly what I did wrong: QGraphicsScene::addPixmap() accepts only a const QPixmap as argument, and I was trying to convert a QImage to a const QPixmap, which is not allowed because QPixmap::fromImage() in turn only accepts a const QImage; it's like a const hell.
The official documentation on this method doesn't make much sense to me either. If I'd like to make, for example, an image viewer, then at runtime I would surely load different images into QImage sourceImage; how can I accomplish that using a const QImage?
This problem has been agonizing; thanks for any advice. Moreover, could you enlighten me a bit on the philosophical reason why the Qt developers make these methods take const arguments?
Try
imageScene->addPixmap(QPixmap::fromImage(*sourceImage));
Some advice:
there is no need to allocate the QImage on the heap (using new).
Use:
QImage sourceImage;
Then you do not need to dereference the pointer when calling QPixmap::fromImage
Just to clarify: the constness has nothing to do with the error.
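Putting that advice together, a sketch of the corrected loading code from the question could look like this:

QGraphicsScene *imageScene = new QGraphicsScene;
QGraphicsView *imageView = new QGraphicsView;
imageView->setScene(imageScene);

QImage sourceImage;                      // stack-allocated, no 'new' needed
sourceImage.load(":/targetimage.png");   // load from the resource file
imageScene->addPixmap(QPixmap::fromImage(sourceImage));  // note '->' on the pointer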
I want to implement cairo's push_group/pop_group with QPainter, but QPainter resets all of its state when begin() is called with a new paint device, so I have to save/restore all the state manually.
Yes, just check out QPainter::save() and QPainter::restore().
If you want to save/restore between the lifespan of multiple QPainters, you have to do it manually. You could just create a class PainterState that encapsulates the painter state (pen, brush, transform, etc.), and then store a QStack<PainterState>.
There is a QPainterState class, but it is for internal use only, and I think it's only for use with a single QPainter. See the source ("qpainter_p.h") if you're interested in the QPainterState members (too many to copy here).
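A minimal sketch of that idea, capturing only the pen, brush, font and transform (the struct and function names are made up; extend it with whatever other state you rely on):

#include <QPainter>
#include <QStack>

// Captures the subset of QPainter state we care about.
struct PainterState {
    QPen pen;
    QBrush brush;
    QFont font;
    QTransform transform;
};

QStack<PainterState> stateStack;

// Save the state of the current painter before it goes away.
void pushState(const QPainter &p) {
    stateStack.push({ p.pen(), p.brush(), p.font(), p.transform() });
}

// Re-apply the saved state to a freshly begun painter.
void popState(QPainter &p) {
    if (stateStack.isEmpty())
        return;
    const PainterState s = stateStack.pop();
    p.setPen(s.pen);
    p.setBrush(s.brush);
    p.setFont(s.font);
    p.setTransform(s.transform);
}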
When constructing the QPainter object, you can draw to a QPicture. The picture can then be reloaded when needed and painted out to the real QPaintDevice.
QPicture picture;
QPainter painterQueued;
painterQueued.begin(&picture); // paint in picture
painterQueued.drawEllipse(10,20, 80,70); // draw an ellipse
painterQueued.end(); // painting done
QImage myImage(100, 100, QImage::Format_ARGB32); // the target image needs a valid size
QPainter painterTarget;
painterTarget.begin(&myImage); // paint in myImage
painterTarget.drawPicture(0, 0, picture); // draw the picture at (0,0)
painterTarget.end(); // painting done
You could queue up many QPicture objects in a list, stack, etc, and replay them when needed.
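For example, a sketch of replaying a queue of recorded pictures onto the target device (reusing the names from the snippet above):

QList<QPicture> queue;
queue.append(picture);                    // queue up recorded pictures
// ... later, replay everything onto the real paint device
QPainter painterTarget(&myImage);
for (const QPicture &pic : queue)
    painterTarget.drawPicture(0, 0, pic);
painterTarget.end();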