Improve performance of QWebEngine offscreen rendering - qt

I want to use QWebEngine to render an animated webpage offscreen (with transparency). And I want to do it fast (e.g. 60 fps, which imposes ~16 ms maximum time per render).
This other SO question Is it possible to render QWebEnginePage/QWebEngineView offscreen? gives some hints:
get a QWebEngineView with Qt::WA_DontShowOnScreen attribute
get a QPainter on a QImage (with QImage::Format_ARGB32)
call QWidget::render() on that QWebEngineView instance
This works! But it's quite slow when transparency (QImage::Format_ARGB32) is needed. By slow I mean >50-60 milliseconds per render on my machine (which is equivalent to <20 fps).
The question is: is there a faster, Qt-based alternative to rendering a webpage offscreen (to memory)?
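For reference, here is a minimal sketch of the approach described above (the URL, size and single-frame handling are illustrative; in the animated case the render would run on a ~16 ms timer):

#include <QApplication>
#include <QWebEngineView>
#include <QPainter>
#include <QImage>
#include <QUrl>

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    QWebEngineView view;
    view.setAttribute(Qt::WA_DontShowOnScreen);    // rendered, but never put on screen
    view.page()->setBackgroundColor(Qt::transparent);
    view.resize(1280, 720);
    view.show();
    view.load(QUrl("https://example.org"));

    QObject::connect(&view, &QWebEngineView::loadFinished, [&view](bool ok) {
        if (!ok)
            return;
        // One offscreen frame; for an animation this body would run repeatedly.
        QImage frame(view.size(), QImage::Format_ARGB32);
        frame.fill(Qt::transparent);
        QPainter painter(&frame);
        view.render(&painter);                     // the slow call the question is about
        painter.end();
        frame.save("frame.png");
    });

    return app.exec();
}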

Related

The main thread is blocked by the repaint triggered by update() on a QWidget, causing QKeyEvent handling to be delayed in Qt

I am creating a text editor application using Qt. This application needs to update a lot of text in its drawing area.
I investigated the application's performance and found that increasing the window size significantly reduces the key repeat rate. For example, scrolling the drawing area by holding down a key becomes extremely slow as the application window grows. The cause appears to be the Update() call itself, which repaints the entire application widget, rather than the cost of rendering a lot of text.
I wrote a simple application to check this problem.
This application draws a random rectangle on each key press and prints the key repeat rate to standard output.
https://github.com/akiyosi/testguiapp/blob/master/main.go
The drawing speed (that is, the key repeat rate) decays as the application window size increases.
On my laptop (MacBook Pro Late 2013), the application sustains 60 fps with a window smaller than about one-third of the screen, but drops to about 40 fps at more than half of the screen.
Is there a way to keep the key repeat rate unaffected by the widget's Update()?
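A minimal C++ sketch of the test described above (the linked example is written in Go); the rectangle size and timing details are illustrative:

#include <QApplication>
#include <QWidget>
#include <QPainter>
#include <QKeyEvent>
#include <QElapsedTimer>
#include <QRandomGenerator>
#include <QDebug>

class RectWidget : public QWidget
{
public:
    RectWidget() { m_timer.start(); }

protected:
    void keyPressEvent(QKeyEvent *) override
    {
        // Count key events and print the repeat rate roughly once per second.
        ++m_keyCount;
        if (m_timer.elapsed() >= 1000) {
            qDebug() << "key repeats per second:" << m_keyCount;
            m_keyCount = 0;
            m_timer.restart();
        }
        m_rect = QRect(QRandomGenerator::global()->bounded(width()),
                       QRandomGenerator::global()->bounded(height()), 80, 40);
        update();                                  // schedules a repaint of the whole widget
    }

    void paintEvent(QPaintEvent *) override
    {
        QPainter p(this);
        p.fillRect(m_rect, Qt::darkCyan);
    }

private:
    QRect m_rect;
    QElapsedTimer m_timer;
    int m_keyCount = 0;
};

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    RectWidget w;
    w.resize(800, 600);
    w.show();
    return app.exec();
}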

Video memory usage of QLabel and the effects of QWidget::hide()

I am writing an application and there will potentially be tens of thousands of labels (a log-viewing application of a sort), most of them hidden with QWidget::hide(). I imagine a QLabel, when created, takes up some video memory. Now, does hide() free that video memory? Or will I have to QWidget::remove() most of those hidden labels to keep video memory usage at a reasonable level?
In general, most widgets do not store their pre-rendered images in memory. Instead, they render themselves on demand after being invalidated. However, some do cache their rendering if it is time-consuming. Taking a look at the QLabel source code (http://code.qt.io/cgit/qt/qtbase.git/tree/src/widgets/widgets/qlabel.cpp), it seems that QLabel caches its pixmap only when scaledContents is enabled and scaling is necessary. Plain text-only labels are painted as-is without any caching.
Still, as #G.M mentioned, each widget consumes some system memory to store its own data and some processing time for event handling, so creating 10k labels is a considerable waste of resources. In contrast, item views are single widgets that draw their items on their own surface: no per-item event handling overhead, no unnecessary caches. Like QLabels, item view items are perfectly stylable; see http://doc.qt.io/archives/qt-5.8/stylesheet-examples.html#customizing-qlistview and http://doc.qt.io/archives/qt-5.8/stylesheet-examples.html#customizing-qtreeview for details. More complex looks, such as multi-line list items, are achievable with QItemDelegate: Qt QListWidgetItem Multiple Lines
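A minimal sketch of the item-view alternative (the model contents are illustrative): a single QListView paints only the rows that are currently visible, instead of keeping tens of thousands of QLabel widgets around.

#include <QApplication>
#include <QListView>
#include <QStringListModel>

int main(int argc, char **argv)
{
    QApplication app(argc, argv);

    QStringList lines;
    for (int i = 0; i < 50000; ++i)
        lines << QStringLiteral("log entry %1").arg(i);

    QStringListModel model(lines);
    QListView view;
    view.setModel(&model);
    view.setUniformItemSizes(true);   // lets the view skip per-row size queries
    view.show();

    return app.exec();
}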

Qt4/OpenGL bindTexture in a separate thread

I am trying to implement a CoverFlow-like effect using a QGLWidget; the problem is the texture loading process.
I have a worker (QThread) for loading images from disk, and the main thread checks for newly loaded images; if it finds any, it uses bindTexture to load them into the QGLContext. While a texture is being bound, the main thread is blocked, so I get an fps drop.
What is the right way to do this?
I have found that the default behaviour of bindTexture in Qt4 is extremely slow:
bindTexture(image, target, format, LinearFilteringBindOption | InvertedYBindOption | MipmapBindOption)
Using only LinearFilteringBindOption in the bind options speeds things up a lot; this is my current call:
bindTexture(image, GL_TEXTURE_2D, GL_RGBA, QGLContext::LinearFilteringBindOption);
More info here: the load time for a 3800x2850 BMP file dropped from 2 seconds to 34 milliseconds.
Of course, if you need mipmapping, this is not the solution. In this case, I think that the way to go is Pixel Buffer Objects.
Binding in the main thread (single QGLWidget solution):
decide on a maximum texture size. You could base it on the maximum possible widget size, for example. Say you know that the widget can be at most (approximately) 800x600 pixels and the largest visible cover has 30-pixel margins top and bottom and a 1:2 aspect ratio -> 600 - 2*30 = 540 -> the maximum cover size is 270x540, stored e.g. in m_maxCoverSize.
scale the incoming images to that size in the loader thread. It doesn't make sense to bind larger textures, and the larger an image is, the longer it takes to upload to the graphics card. Use QImage::scaled(m_maxCoverSize, Qt::KeepAspectRatio) to scale the loaded image and pass it to the main thread.
limit the number of textures, or better, the time spent binding them per frame. I.e. remember the time at which you started binding textures (e.g. QTime bindStartTime;) and after binding each texture do:
if (bindStartTime.elapsed() > BIND_TIME_LIMIT)
    break;
BIND_TIME_LIMIT would depend on the frame rate you want to keep. But of course, if binding a single texture takes much longer than BIND_TIME_LIMIT, you haven't solved anything (a fuller sketch of this loop follows below).
You might still experience framerate drops while loading images on slower machines / graphics cards, though. The rest of the code should be prepared to live with that (e.g. use actual elapsed time to drive the animation).
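A hedged sketch of that bind budget (CoverFlowWidget is assumed to be a QGLWidget subclass; the queue, the texture-id list and the 5 ms limit are illustrative):

static const int BIND_TIME_LIMIT = 5;              // ms per frame spent binding textures

void CoverFlowWidget::bindPendingTextures()        // called once per frame
{
    QTime bindStartTime;
    bindStartTime.start();

    // m_pendingImages is a QQueue<QImage> filled by the loader thread
    while (!m_pendingImages.isEmpty()) {
        const QImage image = m_pendingImages.dequeue();
        m_textureIds.append(bindTexture(image, GL_TEXTURE_2D, GL_RGBA,
                                        QGLContext::LinearFilteringBindOption));
        if (bindStartTime.elapsed() > BIND_TIME_LIMIT)
            break;                                 // finish the remaining uploads on later frames
    }
}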
An alternative solution is to bind in a separate thread (using a second, invisible QGLWidget; see the documentation):
2. Texture uploading in a thread.
Doing texture uploads in a thread may be very useful for applications handling large amounts of images that needs to be displayed, like for instance a photo gallery application. This is supported in Qt through the existing bindTexture() API. A simple way of doing this is to create two sharing QGLWidgets. One is made current in the main GUI thread, while the other is made current in the texture upload thread. The widget in the uploading thread is never shown, it is only used for sharing textures with the main thread. For each texture that is bound via bindTexture(), notify the main thread so that it can start using the texture.
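A hedged Qt 4 sketch of that recipe (the class, the signal and the image list are illustrative; synchronisation and error handling are omitted):

#include <QThread>
#include <QGLWidget>
#include <QImage>

class TextureUploadThread : public QThread
{
    Q_OBJECT
public:
    TextureUploadThread(QGLWidget *mainWidget, const QList<QImage> &images,
                        QObject *parent = 0)
        : QThread(parent),
          m_uploadWidget(new QGLWidget(0, mainWidget)),  // shares textures with the visible widget
          m_images(images)
    {
        // The upload widget is never shown; it only provides the sharing context.
    }

signals:
    void textureReady(GLuint textureId);   // connect with Qt::QueuedConnection in the GUI thread

protected:
    void run()
    {
        m_uploadWidget->makeCurrent();     // the shared context is current in this thread
        foreach (const QImage &image, m_images) {
            GLuint id = m_uploadWidget->bindTexture(
                image, GL_TEXTURE_2D, GL_RGBA, QGLContext::LinearFilteringBindOption);
            emit textureReady(id);         // the main thread can start using the texture
        }
        m_uploadWidget->doneCurrent();
    }

private:
    QGLWidget *m_uploadWidget;
    QList<QImage> m_images;
};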

Qt, low cost way to display only part of large QImage

I draw the frequency spectrum of a WAV file into a QImage (example: http://savepic.net/2350314.jpg). The WAV file may be long enough not to fit on screen at a good time resolution.
I need to be able to scroll through the entire file quickly, preferably without filesystem reads.
So I have to keep a large QImage in memory for fast scrolling. Any other approach would be slower, because it would require me to redraw the QImage(s) every time the user scrolls.
Assuming I keep the large QImage in memory (1024x50000, for example), I must be able to display some part of it in the program window.
What is the lowest-cost solution? Using QScrollArea, or maybe QPainter's drawImage() method with offset arguments?
I would definitely build a small custom widget and reimplement its paintEvent() with a QPainter (handling scrolling with offsets etc.).
Using a QPixmap for showing the needed parts of the image should be faster than drawing (a part of) a QImage directly.
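A minimal sketch of such a custom widget (class and member names are illustrative; the same structure works if the data is converted to a QPixmap first, as suggested). Only the currently visible slice of the big image is painted:

#include <QWidget>
#include <QPainter>
#include <QImage>

class SpectrumView : public QWidget
{
public:
    explicit SpectrumView(const QImage &image, QWidget *parent = nullptr)
        : QWidget(parent), m_image(image) {}

    void setOffset(int x)                  // called when the user scrolls
    {
        m_offset = qBound(0, x, m_image.width() - width());
        update();
    }

protected:
    void paintEvent(QPaintEvent *) override
    {
        QPainter p(this);
        // Copy only the visible part of the large image onto the widget.
        p.drawImage(QPoint(0, 0), m_image,
                    QRect(m_offset, 0, width(), height()));
    }

private:
    QImage m_image;
    int m_offset = 0;
};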

How to render a SlimDX scene directly to a GDI bitmap

Is there a way to set the render target to a GDI bitmap in SlimDX so that as soon as the scene is rendered I can immediately BitBlt the render out of there for processing in another thread and continue rendering?
Is it necessary to render to a texture and then copy the contents out to the bitmap? I would like to be able to do this without any unnecessary copying. I'm going to need every speedup I can get.
Sorry, you do need to render to a RenderTarget, then copy that resource into a Texture2D; then you can map the data and get the pixels into your bitmap.
The memory for RenderTargets is marked for a special kind of use by the graphics card and cannot be read from directly.
The memory for Textures can be marked so that it can be read, but only through the API, as it is still held on the graphics card (there are some exceptions, but DirectX has to go with the lowest common denominator).
If you need the extra speed, reuse the same bitmap, or have an array of prepared bitmaps ready to fill and keep them in rotation.
And as ever, measure how much time these things are consuming with a profiler so that you can quantify bottlenecks.
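The same copy chain sketched in plain Direct3D 11 C++ (SlimDX wraps the same API; device, context and renderTargetTexture are assumed to exist already):

D3D11_TEXTURE2D_DESC desc;
renderTargetTexture->GetDesc(&desc);
desc.Usage          = D3D11_USAGE_STAGING;            // CPU-readable copy destination
desc.BindFlags      = 0;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;
desc.MiscFlags      = 0;

ID3D11Texture2D *staging = nullptr;
device->CreateTexture2D(&desc, nullptr, &staging);

context->CopyResource(staging, renderTargetTexture);  // GPU-side copy of the render target

D3D11_MAPPED_SUBRESOURCE mapped;
context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped);
// mapped.pData points at the pixel rows, each mapped.RowPitch bytes apart;
// copy them into the (reused) GDI bitmap here.
context->Unmap(staging, 0);
staging->Release();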
