Qt5.6: high DPI support and OpenGL (OpenSceneGraph)

I have a minimal application that uses QOpenGLWidget to integrate an OpenGL wrapper library (OpenSceneGraph). I am trying to figure out how to correctly use the Qt5.6 support for high DPI screens when dealing with this kind of OpenGL content.
My main() function has the following code:
int main(int argc, char** argv)
{
    // DPI support is on
    QApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
    QApplication app(argc, argv);
    QMainWindow window;
    // QOpenGLWidget with OpenSceneGraph content
    QtOSGWidget* widget = new QtOSGWidget();
    window.setCentralWidget(widget);
    window.show();
    return app.exec();
}
The QtOSGWidget is derived from QOpenGLWidget with OpenSceneGraph content: I use osgViewer::GraphicsWindowEmbedded to render my simple scene.
To merge OSG with Qt, I re-define the *GL() methods: paintGL(), resizeGL() and initializeGL(). I follow the Qt docs on what each of the *GL() methods should contain, i.e.:
paintGL() makes sure the viewer is updated
resizeGL() makes sure the graphics window is resized properly (together with camera and viewport);
initializeGL() makes sure OpenGL state is initialized.
I also re-defined the Qt mouse event handlers so that the events are passed on to OSG.
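For context, here is roughly what my other two overrides look like (a simplified sketch; m_viewer is the osgViewer::Viewer behind m_graphicsWindow, and the real code does more scene setup):
virtual void initializeGL()
{
    // minimal OpenGL state initialization, e.g. enabling depth testing for the scene
    osg::StateSet* stateSet = m_viewer->getCamera()->getOrCreateStateSet();
    stateSet->setMode(GL_DEPTH_TEST, osg::StateAttribute::ON);
}

virtual void paintGL()
{
    // let OSG render one frame into the embedded graphics window
    m_viewer->frame();
}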
When I run my example on a normal resolution screen, or with QApplication::setAttribute(Qt::AA_DisableHighDpiScaling);, the scene looks like it should.
Also, when I manipulate the camera view, the mouse coordinates are captured correctly.
However, when I turn the high DPI option on, the result is wrong.
The mouse coordinates for the events are scaled as well and are not passed to OpenSceneGraph's event handler correctly.
The graphics window size is also not scaled by Qt. This is probably because of the way I set up the sizing:
virtual void resizeGL( int width, int height )
{
    // resize event is passed to OSG
    this->getEventQueue()->windowResize(this->x(), this->y(), width, height);
    // graphics window resize
    m_graphicsWindow->resized(this->x(), this->y(), width, height);
    // camera viewport
    osg::Camera* camera = m_viewer->getCamera();
    camera->setViewport(0, 0, this->width(), this->height());
}
That sizing is not scaled by Qt, and the same happens to the mouse event coordinates.
My question: is there a way to know what size the scaling will result in, so that I can implement resizeGL() correctly? Or what is the correct way to deal with the problem?
Update/Solution using scaling by hand: thanks to the answer of #AlexanderVX, I figured out the scaling solution. First, I need to know some reference DPI values in the X and Y dimensions. Then I calculate the scaling factors based on those and pass them to my widget QtOSGWidget. So, the code of main() has to contain:
QApplication::setAttribute(Qt::AA_EnableHighDpiScaling);
QApplication app(argc, argv);
int x = QApplication::desktop()->physicalDpiX();
int y = QApplication::desktop()->physicalDpiY();
// values 284 and 285 are the reference values
double scaleX = 284.0/double(x);
double scaleY = 285.0/double(y);
QMainWindow window;
QtOSGWidget* widget = new QtOSGWidget(scaleX, scaleY, &window);
// etc.
Then, whenever I pass sizes or coordinates on to the OpenSceneGraph (OpenGL) content, I have to apply the scaling, e.g.:
// resizeGL example
this->getEventQueue()->windowResize(this->x()*m_scaleX, this->y() * m_scaleY, width*m_scaleX, height*m_scaleY);
// mouse event example
this->getEventQueue()->mouseButtonPress(event->x()*m_scaleX, event->y()*m_scaleY, button);
Final update: since the target platform of my application is Windows 7-10, it makes much more sense to stick with the proposed answer of #AlexanderV (second part), i.e., to use the SetProcessDPIAware() function.

Is there a way to know what size the scaling will result in, so that I can implement resizeGL() correctly?
First, detect the monitor:
// relative to widget
int screenNum = QApplication::desktop()->screenNumber(pWidget);
or maybe
// relative to global screen position
int screenNum = QApplication::desktop()->screenNumber(pWidget->mapToGlobal(QPoint(0, 0)));
and that index gives us access to the corresponding QScreen:
QScreen* pScreen = QGuiApplication::screens().at(screenNum);
from which you can read many screen characteristics, including the "physical dots per inch", which tells us how many pixels there are per inch:
qreal pxPerInch = pScreen->physicalDotsPerInch();
Having the pixels per inch, you can scale your drawing code programmatically: decide what the 'normal' density is and then scale proportionally against the density detected on the physical device. Of course, that approach is more suitable for accurate graphics. Be aware of both physicalDotsPerInch() and devicePixelRatio(), though.
qreal scaleFactor = pScreen->physicalDotsPerInch() / normalPxPerInch;
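Putting those pieces together, a helper along these lines could compute the factor for a given widget (sketch only; the 96.0 reference value for 'normal' density is an assumption you should adjust):
qreal scaleFactorFor(QWidget* pWidget, qreal normalPxPerInch = 96.0)
{
    // find the screen the widget currently sits on and compare its physical
    // density against the chosen "normal" density
    int screenNum = QApplication::desktop()->screenNumber(pWidget);
    QScreen* pScreen = QGuiApplication::screens().at(screenNum);
    return pScreen->physicalDotsPerInch() / normalPxPerInch;
}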
Or what is the correct way to deal with the problem?
However, with widgets and normal GUI drawing it is often easier to let Qt / the system scale the entire UI. See the Qt documentation: High DPI Displays.
If the OS is Windows Vista or later and tuning Qt for high DPI sounds complicated, then there is a shortcut that I take and it helps me, though Qt complains in the log: "SetProcessDpiAwareness failed: COM error 0xffffffff80070005 (Unknown error 0x0ffffffff80070005)". I call the SetProcessDPIAware() function from main() before the event loop, and then all the UI looks alike no matter what the monitor density is. I use it with Qt 5.5, though. There is also the SetProcessDpiAwareness() function worth exploring. I use SetProcessDPIAware() because it has been available since Windows Vista, whereas SetProcessDpiAwareness() is only available since Windows 8.1. So, the decision may depend on potential clients' systems.
A 'shortcut' approach:
int main(int argc, char** argv)
{
    // DPI support is on
    // QApplication::setAttribute(Qt::AA_EnableHighDpiScaling);

    // on Windows?
    ::SetProcessDPIAware();
    // MSDN suggests not to use SetProcessDPIAware() as it is obsolete and may not be available.
    // But it works with widgets.

    QApplication app(argc, argv);
    QMainWindow window;
    // QOpenGLWidget with OpenSceneGraph content
    QtOSGWidget* widget = new QtOSGWidget();
    window.setCentralWidget(widget);
    window.show();
    return app.exec();
}

Related

Qt: QOpenGLContexts "mix" with different widgets

I am currently building a Qt application that renders streaming data with native OpenGL in the Qt window.
I have 2 widgets inherited from QOpenGLWidget; one has a parent, the other doesn't. They both work well individually (when I just show() one widget at a time). However, when I try to render them simultaneously, one of the textures I bind via glBindTexture() appears in the wrong window. It's as if they are using the same context(). But by inheriting from QOpenGLWidget, they should have two different contexts.
In my code, I just override initializeGL, paintGL and resizeGL as usual:
void initializeGL() {
    initializeOpenGLFunctions();
    // generate buffer, allocation, shaders...
}

void paintGL() {
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // bind vao, bind texture, drawElements...
}
Basically, my second window (window2) is a "video player" that plays an image sequence from memory, but it appears on window1. I have also set a QSurfaceFormat by:
QSurfaceFormat format;
format.setRenderableType(QSurfaceFormat::OpenGL);
format.setVersion(3,3);
format.setProfile(QSurfaceFormat::CoreProfile);
setFormat(format);
in the constructor.
Could someone tell me what might be wrong here? I think the context() of the two windows should be different, so how could a glBindTexture() call made for window2 apply to window1? If this information is not enough, please tell me. Thanks.
Platform: Ubuntu 16.04, Qt 5.6.2, OpenGL 3.3
Update:
I have the same issue as this post: OpenGL multiple window rendering. However, mine is inside the Qt5 environment, so theoretically it should work.
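To verify my assumption about the contexts, I can at least check whether the two widgets' contexts actually share resources (sketch; widget1 and widget2 stand for my two QOpenGLWidgets and must already have been shown so that their contexts exist):
// diagnostic sketch: are the two widget contexts sharing resources?
bool globalShare = QCoreApplication::testAttribute(Qt::AA_ShareOpenGLContexts);
bool sharing = QOpenGLContext::areSharing(widget1->context(), widget2->context());
qDebug() << "global share attribute:" << globalShare << "contexts sharing:" << sharing;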

Display image using QImage without using pixmap in Qt?

I have a requirement to read pixel values from the picture displayed in the QGraphicsScene layout. How can I display the image using QImage, without using a pixmap, in Qt so that I am able to read the pixel values?
On most platforms, a QPixmap is a thin wrapper around a QImage. The conversions between the two are cheap - especially the pixmap-to-image conversion. Thus, you can use the QGraphicsPixmapItem and use item->pixmap().toImage() without much worry. To confirm that QPixmap is indeed a wrapper, the following check will do:
bool isPixmapThin(const QPixmap &pix) {
    auto const a = pix.toImage();
    auto const b = pix.toImage();
    return a.bits() == b.bits();
}
In all cases, ensure that the image you take from the pixmap won't detach, i.e. always make it const (as in the code example above).
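For example, a minimal sketch (the file name is a placeholder) that displays a pixmap and reads a pixel back through the cheap conversion:
QGraphicsScene scene;
QGraphicsPixmapItem* item = scene.addPixmap(QPixmap("picture.png")); // placeholder image
const QImage image = item->pixmap().toImage(); // cheap, and const so it will not detach
QRgb value = image.pixel(10, 20);              // read a pixel value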

How to work with QGraphicsScene::addPixmap when it only accepts const QPixmap?

I'd like to display some QImage through QGraphicsScene; my code is very straightforward:
mainwindow.h
QImage *sourceImage;
QGraphicsView *imageView;
QGraphicsScene *imageScene;
mainwindow.cpp
imageScene = new QGraphicsScene;
imageView = new QGraphicsView;
imageView->setScene(imageScene);
sourceImage = new QImage;
sourceImage.load(":/targetimage.png");
imageScene.addPixmap(QPixmap::fromImage(sourceImage));
And then the compiler points out exactly what I did wrong: QGraphicsScene::addPixmap accepts only a const QPixmap as argument, and I was trying to convert a QImage to a const QPixmap, which is not allowed because QPixmap::fromImage in turn only accepts a const QImage. It is like a const hell.
The official documentation on this method doesn't make much sense to me either. If I'd like to make, for example, an image viewer, then at runtime I would surely load different images into QImage sourceImage, so how can I accomplish that using a const QImage?
This problem has been agonizing; thanks for any advice. Moreover, could you enlighten me a bit if there is any philosophical reason why the people at Qt make these methods const?
Try
imageScene->addPixmap(QPixmap::fromImage(*sourceImage));
Some advice:
there is no need to allocate the QImage on the heap (using new).
Use:
QImage sourceImage;
Then you do not need to dereference the pointer when calling QPixmap::fromImage.
Just to clarify: the constness has nothing to do with the error.
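A minimal sketch of the stack-based variant (using the resource path from the question):
QGraphicsScene* imageScene = new QGraphicsScene;
QGraphicsView* imageView = new QGraphicsView;
imageView->setScene(imageScene);

QImage sourceImage;                      // no heap allocation needed
sourceImage.load(":/targetimage.png");
imageScene->addPixmap(QPixmap::fromImage(sourceImage));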

drawText() on a QImage crashes program

I have an image in a uint8_t buffer and I am trying to use QImage as a wrapper to write text on the image. I have used drawLine() with no issues, but drawText() crashes the program. The code below is part of a Boost thread in which I want to write text onto each image as it passes through the function. Are there any bugs in Qt I am unaware of?
uint8_t *frameBuffer; // this contains image pixels
QImage img(frameBuffer, sizeX, m_sizeY, QImage::Format_RGB888);
QPainter p(&img);
p.setPen(QPen(Qt::green));
p.setFont(QFont("Times", 10, QFont::Bold));
p.drawLine(img.rect().bottomLeft().x(), img.rect().bottomLeft().y()-10,
img.rect().bottomRight().x(), img.rect().bottomRight().y()-10); //works!
p.drawText(img.rect(), Qt::AlignCenter, "Help"); //crashes program
My project was set to a QCoreApplication (I had no GUI). Changing it to QApplication did the trick!
Just a guess... (I've never seen this error before, but have had other font issues on threads.)
Font rendering on background threads can be a little flaky in Qt, depending on how it was compiled. Check the value of QFontDatabase::supportsThreadedFontRendering on your system.
Note the documentation:
Returns true if font rendering is supported outside the GUI thread,
false otherwise. In other words, a return value of false means that
all QPainter::drawText() calls outside the GUI thread will not produce
readable output.
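A quick sketch of that check (call it once after the application object exists and before spawning the worker thread):
// sketch: check whether drawText() is usable off the GUI thread on this system
if (!QFontDatabase::supportsThreadedFontRendering())
    qWarning() << "Threaded font rendering is not supported; draw text from the GUI thread instead.";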

QGLWidget and hardware acceleration?

Greetings all,
Does simply subclassing QGLWidget and reimplementing paintEvent() make use of OpenGL and hardware acceleration?
I create a QPainter and draw QImages in this paintEvent().
What happens inside the paintEvent() method of QGLWidget? Does it convert the images (QImage, QPixmap) into OpenGL textures?
Does it use hardware acceleration for image scaling?
Thanks in advance,
umanga
Take a look at http://doc.qt.io/archives/4.6/opengl-2dpainting.html for an instructive example, where you can also find the following quote: "it is possible to re-implement its [QGLWidget] paintEvent() and use QPainter to draw on the device, just as you would with a QWidget. The only difference is that the painting operations will be accelerated in hardware if it is supported by your system's OpenGL drivers."
So, the answer to your first question is yes.
For figuring out the exact details of the implementation, let's take a quick peek at a piece of source code from QOpenGLPaintEngine (which can be found by searching the internet):
void QOpenGLPaintEngine::drawImage(const QRectF &r, const QImage &image,
                                   const QRectF &sr, Qt::ImageConversionFlags)
{
    Q_D(QOpenGLPaintEngine);
    if (d->composition_mode > QPainter::CompositionMode_Plus
        || d->high_quality_antialiasing && !d->isFastRect(r))
        d->drawImageAsPath(r, image, sr);
    else {
        GLenum target = (QGLExtensions::glExtensions
                         & QGLExtensions::TextureRectangle)
                        ? GL_TEXTURE_RECTANGLE_NV
                        : GL_TEXTURE_2D;
        if (r.size() != image.size())
            target = GL_TEXTURE_2D;
        d->flushDrawQueue();
        d->drawable.bindTexture(image, target);
        drawTextureRect(image.width(), image.height(), r, sr, target);
    }
}
This answers your question regarding QImages: they are indeed drawn using textures.
Yes, if you use GL commands inside a QGLWidget, inside the paintGL, resizeGL and initializeGL methods, you will get full hardware acceleration (if available).
It also seems that using QPainter in a QGLWidget gets HW acceleration, since there is an OpenGL QPaintEngine implementation; you can read about that here.
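For illustration, the basic pattern is just a normal QPainter inside paintEvent() (a sketch; GLImageWidget and m_image are placeholder names):
class GLImageWidget : public QGLWidget
{
protected:
    void paintEvent(QPaintEvent*)
    {
        // QPainter picks the OpenGL paint engine on a QGLWidget, so the image
        // is uploaded as a texture and scaled on the GPU where supported
        QPainter painter(this);
        painter.setRenderHint(QPainter::SmoothPixmapTransform);
        painter.drawImage(rect(), m_image);
    }

private:
    QImage m_image; // placeholder: whatever image you want to display
};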
