OpenGL performance in Qt

I am using Qt 4.8 with OpenGL ES 2.0 (I have created my own set of shaders, etc.).
Running the entire application from within a QGraphicsView that owns a QGraphicsScene, I get a relatively low frame rate (around 30 fps) even though I am rendering a simple model.
The scene render is triggered by a timer that fires at 50 Hz (this can be changed).
Which sections are likely the most time-consuming, and therefore worth attending to first in order to improve the frame rate?
Also, I noticed (using gDebugger) that the functions and state changes associated with GL_STENCIL_TEST and GL_SCISSOR_TEST are the biggest consumers here (over 30% of the time is spent on them). Is there any way to bypass that?
Thanks.

Related

QWidget update() redraws block the main thread, delaying QKeyEvent detection in Qt

I am creating a text editor application in Qt that needs to update a lot of text in its drawing area.
While investigating the application's performance, I found that increasing the window size significantly reduces the key repeat rate: for example, scrolling the drawing area by holding down a key becomes extremely slow as the window grows. The cause appears to be the cost of the update() call that repaints the entire application widget, rather than the cost of rendering a lot of text.
I wrote a simple application to check this problem.
It draws a random rectangle on each key press and prints the key repeat rate to standard output.
https://github.com/akiyosi/testguiapp/blob/master/main.go
The drawing speed (that is, the key repeat rate) drops as the application window size increases.
On my laptop (MacBook Pro Late 2013), the application can achieve 60 fps with a window smaller than one-third of the screen, but drops to about 40 fps when the window covers more than half of the screen.
Is there a way to keep the key repeat rate unaffected by the widget's update()?

Is there a maximum number of QTimer instances?

Is there an upper limit for QTimer instances?
I am implementing the game Bomberman, and I was thinking that each bomb would get its own timer, ranging from 4 to 5 seconds. With up to 8 players, each of whom may have 10 bombs, we are talking about roughly 100 timers.
Should I keep track of the timings by myself or use a timer per bomb?
Please keep in mind that one detonation may trigger others.
I am not aware of any such limit, and one wouldn't make sense either. 100 timers shouldn't be a problem on any platform supported by Qt. The timer precision, however, will vary with the platform and the amount of load on the event loop.
I'd say go for the simple solution, and only dig into a more complex solution if you experience performance issues.
Obviously, anything other than a trivial game will implement its own blocking game loop rather than relying on Qt's events, keeping its own time and managing all game objects. Qt and its classes are meant for application development; while they can be useful for a simple, trivial game, if you are serious about game making you really need a game engine, whether third-party or one you write yourself.
You can create a class for the bombs with a QTimer as a member.
When a bomb is created, its timer starts automatically.

How can I make a QT app displaying very large amount of data with low memory usage?

It's a Sysinternals-Process-Monitor-like system monitor based on Qt 5.7.0 that can monitor and record most process behavior in the system.
[screenshot: program view]
[screenshot: memory usage]
As you can see, it costs 100 MB+ of memory when 30,000+ events are recorded,
and memory usage can easily grow to 1.0 GB or even 2.0 GB with more events, which is unacceptable on a low-performance machine. Should I save these events in an SQL database?
I use a QTableView with a custom model inherited from QAbstractTableModel, which displays only the visible items. The memory issue is not caused by the UI, since the application uses just as much memory even if I remove the table view.

Qt4/OpenGL bindTexture in a separate thread

I am trying to implement a CoverFlow-like effect using a QGLWidget; the problem is the texture loading process.
I have a worker thread (QThread) for loading images from disk, and the main thread checks for newly loaded images; if it finds any, it uses bindTexture to upload them into the QGLContext. While a texture is being bound, the main thread is blocked, so I get an fps drop.
What is the right way to do this?
I have found that the default behaviour of bindTexture in Qt4 is extremely slow:
bindTexture(image, target, format, LinearFilteringBindOption | InvertedYBindOption | MipmapBindOption)
Using only LinearFilteringBindOption in the binding options speeds things up a lot; this is my current call:
bindTexture(image, GL_TEXTURE_2D, GL_RGBA, QGLContext::LinearFilteringBindOption);
More info: the load time for a 3800x2850 BMP file dropped from 2 seconds to 34 milliseconds.
Of course, if you need mipmapping, this is not the solution. In this case, I think that the way to go is Pixel Buffer Objects.
Binding in the main thread (single QGLWidget solution):
Decide on a maximum texture size. You could base it on the maximum possible widget size, for example: say you know the widget can be at most (approximately) 800x600 pixels and the largest visible cover has 30-pixel margins top and bottom and a 1:2 aspect ratio -> 600 - 2*30 = 540 -> the maximum cover size is 270x540, stored e.g. in m_maxCoverSize.
Scale the incoming images to that size in the loader thread. It doesn't make sense to bind larger textures, and the larger a texture is, the longer it takes to upload to the graphics card. Use QImage::scaled(m_maxCoverSize, Qt::KeepAspectRatio) to scale the loaded image and pass it to the main thread.
Limit the number of textures, or better, the time spent binding them per frame. That is, remember the time at which you started binding textures (e.g. QTime bindStartTime;) and after binding each texture do:
if (bindStartTime.elapsed() > BIND_TIME_LIMIT)
break;
BIND_TIME_LIMIT would depend on the frame rate you want to keep. Of course, if binding even a single texture takes much longer than BIND_TIME_LIMIT, you haven't solved anything.
You might still experience frame rate drops while loading images on slower machines/graphics cards. The rest of the code should be prepared to live with that (e.g. use actual elapsed time to drive animations).
An alternative solution is to bind in a separate thread (using a second, invisible QGLWidget; see the documentation):
2. Texture uploading in a thread.
Doing texture uploads in a thread may be very useful for applications that need to display large numbers of images, for instance a photo gallery application. This is supported in Qt through the existing bindTexture() API. A simple way of doing it is to create two sharing QGLWidgets. One is made current in the main GUI thread, while the other is made current in the texture-upload thread. The widget in the uploading thread is never shown; it is only used for sharing textures with the main thread. For each texture that is bound via bindTexture(), notify the main thread so that it can start using the texture.

Updating QGLWidget - event or signal/slot?

I need to flash some images with very precise timing (on the order of milliseconds), for which I developed a subclass of QGLWidget. The images are loaded as textures at initialization. I am using a QTimer instance to change the image being displayed: the timer's timeout() signal is connected to a slot that does some file I/O and then calls updateGL().
While I understand that event handlers are for external events and signals/slots are for communication internal to the GUI, I could also implement this with a timerEvent() handler.
Is there any performance difference between the two? Any penalty above 2-3 milliseconds matters to me (the hardware is average, say an Intel Core 2 Duo T5700 with an nVidia 8600M GT graphics card).
Signals and slots are about 10x slower than plain old function calls, but they are definitely not so slow that they would take milliseconds to process. The time to process one signal is about 0.001 ms (see slide 27).
You say that you require very precise timing, so are you aware of how the display refresh rate affects drawing? Images are (usually) drawn at a 60 Hz refresh rate. The time between frames is 16.7 ms, so that is the best timing accuracy you can get.
I would say signal/slot, because events are added to an event queue where Qt often performs call optimisations and importance ordering, whereas signals/slots are executed immediately, albeit more slowly than direct calls.
