I have a Qt 4.8.5 application. The top level widget is derived from QMainWindow. The central widget (QMainWindow::setCentralWidget) is derived from QTableView. The data model for the view is derived from QAbstractTableModel.
The data model changes dynamically during the life of the application, without human intervention. The application GUI is viewed on a wall screen in a control room for multiple people to see, and typically nobody directly interacts with the GUI.
The changes to the data model are significant, including dynamic table rows and columns. The application window needs to resize to fit the QTableView every time QTableView::setModel is called to install a new data model, so that a minimal footprint is maintained. The out-of-the-box behavior is for the window to take on an initial size that never changes when a new data model is introduced. My various attempts to program a window that grows and shrinks dynamically have all failed.
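For reference, the technique that eventually worked (detailed in the additional information below) was computing the size and fixing the widget size. Here is a hedged sketch of that idea in Qt 4 terms; the helper name fitToModel, the member m_view, and the exact size arithmetic are assumptions of mine, not verified code:

    // Hypothetical helper, called after every QTableView::setModel().
    // Sums the header sections plus the frame to get the view's natural
    // size, then pins the view and lets the QMainWindow follow.
    void MainWindow::fitToModel()
    {
        m_view->resizeColumnsToContents();
        m_view->resizeRowsToContents();

        int w = m_view->verticalHeader()->width() + 2 * m_view->frameWidth();
        for (int c = 0; c < m_view->model()->columnCount(); ++c)
            w += m_view->columnWidth(c);

        int h = m_view->horizontalHeader()->height() + 2 * m_view->frameWidth();
        for (int r = 0; r < m_view->model()->rowCount(); ++r)
            h += m_view->rowHeight(r);

        m_view->setFixedSize(w, h);
        adjustSize();   // let the top-level window shrink or grow to match
    }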
Additional information ...
This is not a new application, but one that has been in use for a number of years. The technique that made it work was indeed computing the size and setting fixed-size widgets. I decided to revisit this app because of some scenarios where the size estimate was wrong. These scenarios are not the main use case of the application, but cases where individual users ran the app on their local workstations, not the control room wall.
I decided to focus on why different use cases produced different results, and learned several interesting things. The accuracy of the sizing calculation depended on where the app was executed from. Running it while directly logged into a Linux workstation always worked. The failures involved remote login: Windows via PuTTY/Exceed, Linux via ssh; all varying with the remote host architecture.
It turns out the GUI style used by Qt affects the fidelity of my size estimate. Since the app did not explicitly set a style, the style used varied across the scenarios mentioned above. The size-estimate inaccuracy was solved by choosing an explicit style in the application code.
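Concretely, the fix was a one-liner at startup. "Plastique" here is only an example choice; any explicitly named Qt 4 style removes the host-to-host variation:

    #include <QApplication>
    #include <QStyleFactory>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);
        // Pin the style so widget metrics (fonts, margins, header sizes)
        // are identical regardless of where the app is displayed from.
        app.setStyle(QStyleFactory::create("Plastique"));
        // ... rest of application setup
        return app.exec();
    }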
Related
I am creating a text editor application using Qt. This application needs to update a large amount of text in its drawing area.
I investigated the application's performance and found that increasing the window size significantly reduces the key repeat rate. For example, scrolling the drawing area by holding down a key becomes extremely slow as the window grows. The cause appears to be the cost of the Update() function itself, which repaints the entire widget, rather than the cost of rendering a lot of text.
I wrote a simple application to check this problem.
This application draws a random rectangle in the window on each key press, and writes the key repeat rate to standard output.
https://github.com/akiyosi/testguiapp/blob/master/main.go
The drawing speed (that is, the key repeat rate) decays as the application window size increases.
On my laptop (MacBook Pro Late 2013), the application can sustain 60 fps with a window smaller than one-third of the screen, but drops to about 40 fps at more than half of the screen.
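For comparison, here is a rough Qt/C++ analogue of that test; the linked app is written in Go, so this sketch of mine only approximates what it does:

    #include <QtGui>

    // Paints one random rectangle per key press and prints the repeat rate.
    class Bench : public QWidget
    {
    public:
        Bench() { m_timer.start(); }
    protected:
        void keyPressEvent(QKeyEvent *)
        {
            m_rect = QRect(qrand() % width(), qrand() % height(), 50, 50);
            qDebug() << 1000.0 / qMax(1, m_timer.restart()) << "repeats/sec";
            update();   // schedules a repaint of the whole widget
        }
        void paintEvent(QPaintEvent *)
        {
            QPainter p(this);
            p.fillRect(m_rect, Qt::red);
        }
    private:
        QRect m_rect;
        QTime m_timer;
    };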
Is there a way to keep the key repeat rate unaffected by the widget's Update()?
I am writing an application and there will potentially be tens of thousands of labels (a log-viewing application of a sort), most of them hidden with QWidget::hide(). I imagine a QLabel, when created, takes up some video memory. Now, does hide() free that video memory? Or will I have to QWidget::remove() most of those hidden labels to keep video memory usage at a reasonable level?
In general, most widgets do not store their pre-rendered images in memory. Instead, they render themselves on demand after being invalidated. However, some do cache if rendering is time-consuming. Taking a look at the QLabel source code (http://code.qt.io/cgit/qt/qtbase.git/tree/src/widgets/widgets/qlabel.cpp), it seems that QLabel caches its pixmap when scaledContents is enabled and scaling is necessary. Plain text-only labels are painted as-is, without any caching.
Still, as #G.M mentioned, each widget consumes some system memory to store its own data and costs processing time for event handling, so creating 10k labels wastes a considerable amount of resources. In contrast, item views are single widgets that draw their items directly on their own surface: no per-item event handling overhead, no unnecessary caches. Like QLabels, item view items are perfectly stylable; see http://doc.qt.io/archives/qt-5.8/stylesheet-examples.html#customizing-qlistview and http://doc.qt.io/archives/qt-5.8/stylesheet-examples.html#customizing-qtreeview for details. More complex looks, such as multi-line list items, are achievable with QItemDelegate: Qt QListWidgetItem Multiple Lines
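As a hedged illustration of the item-view route (a complete toy program of mine, not taken from the question):

    #include <QtGui>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        // One widget for all rows: the view only paints visible items,
        // so an off-screen line costs a model entry, not a whole QLabel.
        QStringList lines;
        for (int i = 0; i < 10000; ++i)
            lines << QString("log line %1").arg(i);

        QStringListModel model(lines);
        QListView view;
        view.setModel(&model);
        view.setUniformItemSizes(true);   // extra speed-up for same-size rows
        view.show();

        return app.exec();
    }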
I need to render lots (hundreds) of similar spheres and cylinders with different transforms. Currently this is achieved by creating hundreds of identical QEntity objects. The result is that the app constantly consumes 20-70% of the CPU, even when the scene is still.
Is there a default source of update events for the widget? If there is one, can I turn it off or reduce its frequency? There seems to be no other source of CPU load but the Qt3D widget.
Did you have a look at the RenderPolicy enum of the QRenderSettings class? It seems you can set the render policy to OnDemand, which causes Qt to render the scene only when something changes.
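A hedged sketch, assuming a Qt3DExtras::Qt3DWindow called view; adapt to however you embed Qt3D:

    // Render only when the scene graph changes instead of continuously
    // (the default policy is QRenderSettings::Always).
    view->renderSettings()->setRenderPolicy(
        Qt3DRender::QRenderSettings::OnDemand);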
Alternatively, if you don't need interaction with the scene, you could have a look at my implementation of an offline renderer. The underlying QAspectEngine starts and stops whenever you set a root entity. You could set your root entity, capture the frame, and then unset the root entity, stopping the graphics loop, which I would guess results in less CPU load.
I am trying to implement a CoverFlow-like effect using a QGLWidget; the problem is the texture loading process.
I have a worker (QThread) for loading images from disk, and the main thread checks for newly loaded images; if it finds any, it uses bindTexture to load them into the QGLContext. While a texture is being bound, the main thread is blocked, so I get an fps drop.
What is the right way to do this?
I have found that the default behaviour of bindTexture in Qt 4 is extremely slow:

    bindTexture(image, target, format, LinearFilteringBindOption | InvertedYBindOption | MipmapBindOption)
Using only LinearFilteringBindOption in the binding options speeds things up a lot; this is my current call:

    bindTexture(image, GL_TEXTURE_2D, GL_RGBA, QGLContext::LinearFilteringBindOption);
More info: the load time for a 3800x2850 BMP file dropped from 2 seconds to 34 milliseconds.
Of course, if you need mipmapping, this is not the solution. In this case, I think that the way to go is Pixel Buffer Objects.
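I haven't needed that path myself, but a bare-bones PBO upload looks roughly like this (raw OpenGL, error handling omitted; w, h, tex, and image are assumed to exist already):

    // Stream the pixels through a pixel-unpack buffer; glTexImage2D then
    // sources from the PBO instead of blocking on client memory.
    GLuint pbo;
    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, w * h * 4, 0, GL_STREAM_DRAW);

    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    memcpy(dst, image.bits(), w * h * 4);   // this copy could run in the worker
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);   // offset 0 into the PBO
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);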
Binding in the main thread (single QGLWidget solution):
decide on a maximum texture size. You could base it on the maximum possible widget size, for example. Say you know the widget can be at most (approximately) 800x600 pixels and the largest visible cover has 30-pixel margins top and bottom and a 1:2 aspect ratio -> 600 - 2*30 = 540 -> the maximum cover size is 270x540, stored in e.g. m_maxCoverSize.
scale the incoming images to that size in the loader thread. It doesn't make sense to bind larger textures and the larger it is, the longer it'll take to upload to the graphics card. Use QImage::scaled(m_maxCoverSize, Qt::KeepAspectRatio) to scale loaded image and pass it to the main thread.
limit the number of textures, or better, the time spent binding them per frame; see the sketch after this list. That is, remember the time at which you started binding (e.g. QTime bindStartTime; bindStartTime.start();) and after binding each texture do:

    if (bindStartTime.elapsed() > BIND_TIME_LIMIT)
        break;

BIND_TIME_LIMIT depends on the frame rate you want to keep. But of course, if binding even a single texture takes much longer than BIND_TIME_LIMIT, you haven't solved anything.
You might still experience a framerate drop while loading images on slower machines / graphics cards, though. The rest of the code should be prepared to live with that (e.g. use actual elapsed time to drive animations).
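Putting the pieces together, a hedged sketch of the per-frame bind loop; m_pendingImages, m_textures, and BIND_TIME_LIMIT are invented names:

    // Called once per frame in the main thread. Uploads queued covers
    // until the time budget for this frame is spent.
    QTime bindStartTime;
    bindStartTime.start();
    while (!m_pendingImages.isEmpty()) {
        QImage img = m_pendingImages.dequeue();   // already scaled by the worker
        GLuint tex = bindTexture(img, GL_TEXTURE_2D, GL_RGBA,
                                 QGLContext::LinearFilteringBindOption);
        m_textures.append(tex);
        if (bindStartTime.elapsed() > BIND_TIME_LIMIT)
            break;   // resume with the rest on the next frame
    }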
Alternative solution is to bind in a separate thread (using a second invisible QGLWidget, see documentation):
2. Texture uploading in a thread.
Doing texture uploads in a thread may be very useful for applications handling large amounts of images that need to be displayed, for instance a photo gallery application. This is supported in Qt through the existing bindTexture() API. A simple way of doing this is to create two sharing QGLWidgets. One is made current in the main GUI thread, while the other is made current in the texture upload thread. The widget in the uploading thread is never shown; it is only used for sharing textures with the main thread. For each texture that is bound via bindTexture(), notify the main thread so that it can start using the texture.
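A hedged outline of that setup in Qt 4 terms; the worker class, the textureReady signal, and the thread handover details (doneCurrent() in the GUI thread before makeCurrent() elsewhere) are my assumptions:

    // In the GUI thread: an invisible widget sharing textures with the
    // visible one.
    QGLWidget *uploadWidget = new QGLWidget(0, mainGLWidget /* share */);

    // In the upload thread:
    void UploadWorker::run()
    {
        m_uploadWidget->makeCurrent();
        foreach (const QImage &img, m_images) {
            GLuint tex = m_uploadWidget->bindTexture(
                img, GL_TEXTURE_2D, GL_RGBA,
                QGLContext::LinearFilteringBindOption);
            emit textureReady(tex);   // main thread can start drawing with it
        }
        m_uploadWidget->doneCurrent();
    }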
There are a lot of UI frameworks available for developing UIs for Linux-based OSes on various embedded boards running Linux. I have jotted down a few app requirements.
Application Requirements:
1> A UI depicting 4-5 tables showing dynamic values that keep changing over time (roughly every 5 seconds).
2> The data can also be depicted in graphical format (a line graph or a bar graph of the values); there will be a separate tab for it. The graph should show the dynamic values changing - an auto-refreshing display like in stock market applications.
3> An event-based alarm (audible or visible) that can be triggered based on these dynamic values, e.g. if one value crosses X or falls between Y and Z, the event triggers the alarm.
4> The ability of the UI components to take values from the system layer (like JNI interfaces on Android).
5> The ability to port it to multiple platforms running Linux - embedded boards.
Now I have the following choices for developing the application. I am giving each of them a score against each of the criteria 1-5 above (10 being the best, 1 the least; 5 x 10 = 50 is the ideal score, but I am looking for an average of around 40; the priority is graphical display capability - point 2).
Qt
GTK
PyGTK
Using both Python and GTK
The Android UI framework, in case I decide to use my UI application on Android
C++ GUI
Can someone please tell me why I should use one and not another? For now, I am thinking of developing the application using GTK.
Please confirm or refute my decision.
Rgds,
Softy
I can't tell you which to choose, but your analysis method can be improved. I suggest that you use the Kepner-Tregoe Decision Analysis method.
This is essentially the same as what you have suggested, except that each requirement is given a weight reflecting the importance placed on that factor. That way you score each attribute on its own merits rather than having to factor in how important it is relative to the other attributes; the weights make that part of the process separate and independent. The score for each requirement is then score x weight. The total score and maximum possible score are then largely irrelevant; it is the relative scoring that matters. However, I suggest you reduce the ranges to 0-5 for scores and 1-5 for weights (a weight of zero would mean the factor is not a consideration at all, so would be redundant); otherwise you may find that the distinction between options is not clear.
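To illustrate with invented numbers (the scores below are hypothetical, not a recommendation):

    Requirement            Weight | Qt score  w*s | GTK score  w*s
    Graphing (point 2)        5   |    4      20  |    3       15
    Portability (point 5)     3   |    5      15  |    4       12
    Total                         |           35  |            27

Qt would win here 35 to 27; the absolute totals mean nothing on their own.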
While the allocation of weights and scores is still more-or-less subjective, separating merit from importance typically renders the result far less subjective than if you attempt to factor both into a single score.