Mouse pointer lags in Qt5 app on TI Sitara

We are using a TI Sitara AM33 system-on-chip with a 600 MHz clock and 256 MB of RAM.
The OS is OE Yocto v2.1 Krogoth, kernel 4.4.19. Video driver: DRM/KMS.
We are having issues with mouse performance.
I have made a little video to demonstrate the effect:
https://www.youtube.com/watch?v=5dRDGzhcnn0
Note how the mouse pointer moves smoothly over the blank area of the window and lags over the controls, as if it were moving through jelly. If you have more controls on the window, the mouse becomes so laggy it is unusable. CPU load is minimal, though.
There can be no error in the example app in the video - we created a blank Qt Widgets project, put the controls on the form, and that's it; it does nothing else at all.
Has anyone seen such mouse issues?

If you're not using an X server, then you need to check which platform plugin Qt is using on your platform. Perhaps that plugin is broken, or it is not the best choice in your situation.
Your application is also very unlikely to use the GPU in any capacity other than to composite the windows (if at all), so the low CPU load is rather telling.
It seems as if the event dispatch system on your platform gets slower the more widgets there are. This is unlikely to have much to do with the graphics side of things. As a process of elimination, you could first benchmark the performance of the synchronization primitives (QBasicMutex and QMutex) and of atomic integers and pointers, to ensure that they are configured correctly for your platform.
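A quick way to sanity-check those primitives on the target could look like this (a minimal sketch, assuming Qt 5 and a console project; the iteration count is arbitrary):

```cpp
#include <QAtomicInt>
#include <QElapsedTimer>
#include <QMutex>
#include <cstdio>

int main()
{
    const int iterations = 10000000;

    // Uncontended lock/unlock: on a healthy port this costs a few atomic ops.
    QMutex mutex;
    QElapsedTimer timer;
    timer.start();
    for (int i = 0; i < iterations; ++i) {
        mutex.lock();
        mutex.unlock();
    }
    const qint64 mutexNs = timer.nsecsElapsed();

    // Atomic increments, for comparison.
    QAtomicInt counter(0);
    timer.restart();
    for (int i = 0; i < iterations; ++i)
        counter.fetchAndAddOrdered(1);
    const qint64 atomicNs = timer.nsecsElapsed();

    std::printf("QMutex lock/unlock: %.1f ns/op\n", double(mutexNs) / iterations);
    std::printf("QAtomicInt add:     %.1f ns/op\n", double(atomicNs) / iterations);
    return 0;
}
```

If the mutex figure is orders of magnitude above the atomic one, the platform's synchronization primitives are likely misconfigured and the event dispatch cost would follow from that.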

Related

OpenGL context conflict between Qt and a third party game engine - how to resolve?

I am trying to use an OpenGL context external to Qt, via a window handle that comes from Qt. The setup is:
A QMainWindow - contains various widgets, including a QWebEngineView to make some web content available (in my case it's Leaflet, for rendering and interacting with OpenStreetMap tiles)
Panda3D engine - rendered on top of my Qt application using the window handle of the central QWidget.
The setup works...when Panda3D is set to DirectX9 (aka pandadx9). When I switch the pipeline to OpenGL (aka pandagl) I get a black screen (Panda3D's window) and very glitchy OpenGL content (Qt). The reason is simple yet beyond my ability to fix: QWebEngineView uses OpenGL. Somehow there is a conflict at the OpenGL context level between the engine and Qt.
I am looking for a way to resolve this without removing the direct interaction with Panda3D's window (in my case using ShowBase), since the engine already offers a lot in terms of features for handling mouse events that I would otherwise be forced to reimplement in Qt and pass down to the engine. In addition, I am not sure whether I can make Panda3D render its scene to an FBO, or how I would load that into, say, a QOpenGLWidget. Activating a shared OpenGL context before initializing QApplication allows multiple OpenGL widgets to render OpenGL content.
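For reference, that shared-context switch has to be set before the QApplication object exists (Qt 5.4+); a minimal sketch:

```cpp
#include <QApplication>

int main(int argc, char *argv[])
{
    // Must be set before the QApplication object is constructed (Qt 5.4+).
    QCoreApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
    QApplication app(argc, argv);
    // ... create the widgets that render OpenGL content ...
    return app.exec();
}
```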
So far I have experimented with integrating Panda3D with Qt in two ways:
run two event loops in parallel - start the Panda3D engine in a child process and then communicate with it through a pipe
run a single event loop - use the Panda3D engine's event loop to also drive Qt's main event loop, by adding a task to the engine's task manager that runs QApplication::processEvents() on every cycle (see the sketch below)
In both cases I am handing over a window Id (QWidget::winId()) as the parent window of Panda3D.
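As an illustration of the single-event-loop idea, here is a hedged C++ sketch of the same structure; engineRenderFrame() is a placeholder for the engine's per-frame work, not a real Panda3D call:

```cpp
#include <QApplication>
#include <QWidget>

// Placeholder for the engine's per-frame work (not a real Panda3D API).
static void engineRenderFrame()
{
    // render, audio, physics, ...
}

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QWidget host;
    host.show();
    // The engine would be given host.winId() as its parent window here.

    // The engine's loop drives everything; Qt gets one slice per cycle.
    while (host.isVisible()) {
        engineRenderFrame();
        QApplication::processEvents();
    }
    return 0;
}
```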
CASE 1 - parallel processes, separate event loops
This approach comes with a lot of overhead. All the communication between the Qt content (running in the parent process) needs to be sent (via a pipe, hence involving IPC) to the engine (running in the child process). This adds a lot of code complexity, and in the case of my logging (using Python's logging module with a custom logging handler that writes records into an SQLite3 database) it introduces a whole lot of issues. Concurrent write access to a file between processes is a tricky thing in general, and I'm definitely not an expert. In this case, however, the OpenGL issue I described above is not present!
CASE 2 - single process, single event loop
In my opinion this is the more elegant approach and is what I would like to go with (if possible). An example can be found here. I use the engine's main loop to process Qt's main loop. This is due to the fact that a 3D game engine usually has to deal with far more events in a shorter period of time (rendering, audio, video, filesystem access, physics and so on) than a standard Qt GUI. This is also the recommended way as described in the official documentation of the engine. The other way around (Qt's main loop handling Panda3D's) is also possible. IMHO neither has anything to do with my issue, namely that the moment I add anything Qt-ish that uses OpenGL, the problem I've described above occurs. On Windows this is not a huge deal-breaker, since I can use DirectX for the engine while Qt does its OpenGL thing. On Linux that is not possible (without something like Wine). In addition, I want to use OpenGL (including GLSL) exclusively.
Screenshots (omitted here) showed a visual representation of case 1 and case 2 with mixed DX9 and OpenGL, and two visual representations of case 2 with only OpenGL.
While Panda3D offers CPU rendering (aka p3tinydisplay), QWebEngineView does not. Falling back to CPU rendering on the engine side is not an option, considering the huge number of polygons I have to render, not to mention that I can do something more useful with the CPU (e.g. processing the physics).
Last but not least, I have seen a third integration attempt, which I quickly discarded - rendering the scene as an image to RAM, reading it in Qt, generating a QPixmap from it and painting it on top of a QLabel. Needless to say, this is a no-go for my scenario due to the heavy performance hit, among other things.
Any ideas how to tackle this?
I don't think the event loops have anything to do with this; the problem is that by default child windows get the same device context (DC) as the parent. In your case that's a problem because two different components (the Qt framework and the Panda3D engine) each try to call ChoosePixelFormat and initialize an OpenGL context on the same DC, which is not supported.
The proper solution is to create the QWidget hosting the Panda3D engine with the Qt::MSWindowsOwnDC window flag, which corresponds to the CS_OWNDC window-class style. Normally a QWidget doesn't create any native window at all - it is rather implemented entirely within the Qt framework, by drawing itself on the parent window.
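A minimal sketch of that suggestion, assuming Qt 5 on Windows (the engine hand-off is only indicated by a comment):

```cpp
#include <QApplication>
#include <QWidget>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    // The flag forces a native window with its own DC (CS_OWNDC), so the
    // engine's ChoosePixelFormat call can't clash with Qt's.
    QWidget host(nullptr, Qt::MSWindowsOwnDC);
    host.resize(800, 600);
    host.show();

    const WId engineParent = host.winId(); // hand this to Panda3D as parent
    Q_UNUSED(engineParent);

    return app.exec();
}
```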

QGLWidget is slower than QWidget

The problem is mostly described in the title. I tried out Qt's 2dpainting example and noticed that the same code consumes more CPU power when drawing on a QGLWidget, and less when drawing on a plain QWidget. I thought the QGLWidget was supposed to be faster. And one more interesting phenomenon: on the QGLWidget the antialiasing hint seems to be ignored.
OpenGL version: 3.3.0
So why is that?
Firstly, note this text at the bottom of the documentation you link to:
The example shows the same painting operations performed at the same time in a Widget and a GLWidget. The quality and speed of rendering in the GLWidget depends on the level of support for multisampling and hardware acceleration that your system's OpenGL driver provides. If support for either of these is lacking, the driver may fall back on a software renderer that may trade quality for speed.
Putting that aside, hardware rendering is not always guaranteed to be faster than software rendering; it all depends upon what the renderer is being asked to do.
An example of where software can exceed hardware is when the geometry being rendered is constantly changing. So, if you have a drawing program that draws a line following the mouse as it moves, implemented by adding points to a painter path that is redrawn every frame, a hardware renderer will be subject to constant pipeline stalls as new points are added to the path. Setting up the graphics pipeline after a stall takes time, which is not something a software renderer has to deal with.
In the 2dpainting example you ask about, the helper class that performs the paint calls is doing a lot of unnecessary work: saving the painter state, setting the pen / brush, rotating the painter, restoring the painter state. All of this is a bigger overhead in hardware than in software. To really see hardware rendering outperform software, pre-calculate the objects' positions outside of the render loop (the paint function) and then do nothing but actual rendering in the paint function; that is likely to show a noticeable difference here.
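To illustrate that suggestion, a hedged sketch (the class and member names are mine, not from the example): the geometry is computed once in the constructor, so the paint function only issues draw calls:

```cpp
#include <QPainter>
#include <QVector>
#include <QWidget>
#include <QtMath>

class Canvas : public QWidget
{
public:
    Canvas()
    {
        // Pre-calculate the geometry once, outside the render loop.
        for (int i = 0; i <= 360; ++i)
            m_points.append(QPointF(200.0 + 150.0 * qCos(qDegreesToRadians(qreal(i))),
                                    200.0 + 150.0 * qSin(qDegreesToRadians(qreal(i)))));
    }

protected:
    void paintEvent(QPaintEvent *) override
    {
        // The paint function does nothing but render.
        QPainter painter(this);
        painter.drawPolyline(m_points.constData(), m_points.size());
    }

private:
    QVector<QPointF> m_points;
};
```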
Finally, regarding anti-aliasing, the documentation that you linked to states: "the QGLWidget will also use anti-aliasing if the required extensions are supported by your system's OpenGL driver"

Qt & OpenGL: how to render as frequent as possible?

I'm porting an OpenGL application from a Win32 project to a Qt project.
In the Win32 project I simply had a main loop that kept executing the next frame as soon as the previous one finished.
In the tutorials I find about Qt and OpenGL, they use timers to control the update of the scene. However, I'd like to render frames as frequently as possible - how do I do that?
However, I'd like to render frames as frequently as possible - how do I do that?
Not with Qt. Qt takes full control of the event loop, thus not giving you any means of directly performing idle operations. The standard way to perform idle operations in Qt is to use a QTimer with a timeout of 0.
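A minimal sketch of the zero-timeout idiom, in Qt 5 terms (QOpenGLWidget and the Qt >= 5.7 QOverload helper; with Qt 4 the same idea works with QGLWidget and the SIGNAL/SLOT macros):

```cpp
#include <QApplication>
#include <QOpenGLWidget>
#include <QTimer>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QOpenGLWidget view;
    view.show();

    // A zero-interval timer fires whenever the event queue is empty, so a
    // repaint is queued on every pass of the event loop.
    QTimer renderTimer;
    renderTimer.setInterval(0);
    QObject::connect(&renderTimer, &QTimer::timeout,
                     &view, QOverload<>::of(&QWidget::update)); // Qt >= 5.7
    renderTimer.start();

    return app.exec();
}
```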
Another possibility is using a second thread just for the OpenGL operations. However, to work reliably you should create a dedicated OpenGL context for that thread.
Qt simply is not the right toolkit for those kinds of programs where the primary task of the main loop is rendering pictures, and input and event management are only secondary.
Try calling update() at the end of your paint handler.
That will queue up another repaint event.
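A sketch of that repaint-chaining idea, assuming Qt 5's QOpenGLWidget: requesting another update at the end of the paint handler keeps a frame permanently queued.

```cpp
#include <QOpenGLWidget>

class SpinningView : public QOpenGLWidget
{
protected:
    void paintGL() override
    {
        // ... draw the current frame here ...
        update(); // queue the next repaint as soon as this one finishes
    }
};
```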
The intrinsic principle of Qt and OpenGL is not rendering as soon as possible, but rendering whenever we want (a huge difference).
You have to ask the system for the maximum framerate (search for OpenGL samples; I'm sorry, I don't remember where I saw that). Once you've done that, just create a timer based on this max framerate, and you'll get what you want.
However, I'd like to render frames as frequently as possible - how do I do that?
Use QTimer with zero interval.

Can I draw on a window that does not belong to me, using OpenGL?

I heard you can hook a window handle and use that window as an OpenGL canvas. I know how to hook windows, but I can't find out how to draw on such a window.
PS. I'm using Qt if it helps.
OpenGL contexts are only usable by one thread at a time and are bound to processes. So what this requires is creating an OpenGL context on a foreign process's window.
On Windows, using some very quirky hacks, this was possible at least on WinXP (I don't know about Vista or 7); this usually involves making large portions of the process memory shared.
On X11/GLX it's a lot easier, by creating the context as an indirect rendering context (unfortunately OpenGL 3 has no complete indirect GLX specification, for some shady excuses of reasons); indirect contexts can be accessed from multiple processes.
In any case both processes must cooperate to make this work.
Qt has some native-window hacks you can play with a bit.
On Windows you can use FindWindowEx to get an HWND and interrogate its geometry, then position your own window on top of it.
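A hedged Win32 sketch of that overlay idea (the target window class name is a placeholder, and this only positions the overlay once; it does not track the target if it moves):

```cpp
#include <windows.h>

int main()
{
    // Locate the foreign top-level window (class name is a placeholder).
    HWND target = FindWindowExW(nullptr, nullptr, L"TargetWindowClass", nullptr);
    if (!target)
        return 1;

    // Interrogate its on-screen geometry.
    RECT r;
    GetWindowRect(target, &r);

    // Park a topmost window of our own over the same area; an OpenGL context
    // could then be created on this window as usual.
    HWND overlay = CreateWindowExW(
        WS_EX_TOPMOST, L"STATIC", L"overlay", WS_POPUP | WS_VISIBLE,
        r.left, r.top, r.right - r.left, r.bottom - r.top,
        nullptr, nullptr, GetModuleHandleW(nullptr), nullptr);
    if (!overlay)
        return 1;

    // Minimal message pump so the window stays responsive.
    MSG msg;
    while (GetMessageW(&msg, nullptr, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessageW(&msg);
    }
    return 0;
}
```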
You really shouldn't be able to interfere with another process's windows arbitrarily -- it's a security hazard.

OpenGL Threaded Tile Texture Loading with Qt 4.5 / 4.6

I am trying to develop a map application for scientific purposes at my university. Therefore I have access to a lot of tiles (256x256). I can fetch them and save them to a QImage in a separate QThread. My problem is: how can I load the QImage into a texture from within that separate QThread (not the GUI main thread)? Or, even better, give me a tip on how to approach this problem.
I thought about multithreaded OpenGL, but I also require OpenGL picking and I did not come across anything useful for that.
Point me to any useful example code if you like; I am thankful for anything that compiles on Linux :)
Note 1: I am using event-based rendering, so the scene only gets redrawn when it changes.
Note 2: OSG is NOT an option; it's far too heavy for this purpose. A lightweight approach is needed.
Note 3: The application is entirely written in C++.
Thanks for any reply.
P.S. Be patient, I am not as advanced as this topic may (or may not) suggest.
OpenGL is not thread-safe. You can only use a GL context in one thread at a time. Depending on the OS, you also have to explicitly release the context in one thread before you can use it in another.
You cannot speed up texture loading by threading, given that the bottleneck here is the bandwidth to the graphics card.
Let your delivery thread(s) that load the tiles fill up a ring buffer. The GL thread feeds from the ring buffer. With two mutexes it is easy to control the ring buffer and make this a thread-safe operation.
That would be my suggestion.
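A simplified sketch of that handoff, assuming Qt (one mutex guarding a queue keeps the sketch short; the answer's two-mutex ring buffer would additionally reduce producer/consumer contention). Loader threads push decoded tiles; the GL thread pops them and creates the textures itself:

```cpp
#include <QImage>
#include <QMutex>
#include <QMutexLocker>
#include <QQueue>

// Names are illustrative, not from the original answer.
class TileQueue
{
public:
    // Producer side: called from the tile-loading thread(s).
    void push(const QImage &tile)
    {
        QMutexLocker lock(&m_mutex);
        m_tiles.enqueue(tile);
    }

    // Consumer side: called from the GL thread; returns false if empty.
    bool pop(QImage *out)
    {
        QMutexLocker lock(&m_mutex);
        if (m_tiles.isEmpty())
            return false;
        *out = m_tiles.dequeue();
        return true;
    }

private:
    QMutex m_mutex;
    QQueue<QImage> m_tiles;
};
```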
Two tricks I use to speed things up:
pixel buffer objects: map GPU memory so the loading thread can write directly to the GPU;
sync objects: with a sync object I know when the texture is really ready to be used (glTexImage2D with a PBO is asynchronous, so there is no guarantee the texture is ready to be bound; i.e., binding the texture blocks if the DMA transfer hasn't finished updating the texture data)
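A hedged sketch of both tricks combined (desktop OpenGL 3.2+ with GLEW assumed; in a threaded loader, the map/memcpy step is the part a worker thread would do, given appropriate context sharing):

```cpp
#include <GL/glew.h>
#include <cstring>

// Upload one RGBA tile through a pixel buffer object, then fence until the
// async transfer has finished so the texture is safe to bind.
GLuint uploadTileViaPBO(const void *pixels, int w, int h)
{
    GLuint pbo = 0, tex = 0;
    const GLsizeiptr size = GLsizeiptr(w) * h * 4;

    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, size, nullptr, GL_STREAM_DRAW);

    // Map driver-owned memory and copy the tile in.
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    std::memcpy(dst, pixels, size_t(size));
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    // With a PBO bound, the last argument is an offset into the buffer.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    // Sync object: block (up to 1 s) until the upload really completed.
    GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000ull);
    glDeleteSync(fence);

    glDeleteBuffers(1, &pbo);
    return tex;
}
```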
