I'm using a QTreeView/QFileSystemModel pairing to display directory contents. I'm loading a large directory, with tens of thousands of files and subdirectories in it. When I issue an expandAll() call, the tree takes a long time to fully open - on the order of 30 seconds or so. I'd like to put up a "Please Wait..." dialog until the tree is completely expanded.
The expandAll() function, though, returns almost immediately. I am looking for a signal that's emitted when the tree is fully expanded, but there doesn't seem to be one. The 'expanded' signal is emitted only for a specific index, and I can't tell which index I should use to indicate complete expansion.
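For concreteness, this is the kind of workaround I am considering in the absence of such a signal. It relies on QFileSystemModel::directoryLoaded() plus a "quiet period" heuristic rather than any real completion signal; the function name and the 500 ms timeout are arbitrary placeholders of mine.

    #include <QFileSystemModel>
    #include <QProgressDialog>
    #include <QTimer>
    #include <QTreeView>

    void expandAllWithWaitDialog(QTreeView *view, QFileSystemModel *model, QWidget *parent)
    {
        auto *dialog = new QProgressDialog(QStringLiteral("Please Wait..."),
                                           QString(), 0, 0, parent);   // range 0,0 = busy indicator
        dialog->setCancelButton(nullptr);
        dialog->setAttribute(Qt::WA_DeleteOnClose);
        auto *quiet = new QTimer(dialog);
        quiet->setSingleShot(true);

        // Each time a directory finishes loading, expand it (which triggers
        // fetching of its children) and restart the quiet-period timer.
        QObject::connect(model, &QFileSystemModel::directoryLoaded, dialog,
                         [view, model, quiet](const QString &path) {
            view->expand(model->index(path));
            quiet->start(500);
        });
        // No directoryLoaded() for 500 ms: assume the tree is fully expanded.
        QObject::connect(quiet, &QTimer::timeout, dialog, &QProgressDialog::close);

        dialog->show();
        view->expandAll();   // expands whatever is already populated
        quiet->start(500);   // fallback in case nothing more needs loading
    }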
I created a dynamically loading model that supplies data to a modified QTreeView. Everything seemed to work fine, until I realized that sometimes the GUI "waits" for something.
It turned out that my model's fetchMore() is called every now and then. I found out that there is a "fetchMoreTimer" running in QAbstractItemView (the base class of most views):
rowsInserted and/or updateGeometries() gets called and starts the timer.
when the timer fires it triggers timerEvent()
timerEvent() results in doItemsLayout()
doItemsLayout() asks canFetchMore() (it sure can! I am dynamically loading millions of rows) and so the model fetches more.
And some time later the timer fires again... resulting in a virtually endless repaint of the tree view (in the main thread, as with all GUI operations). This prevents, for example, a memory-expensive load/save (saving or loading the data associated with millions of rows).
Can someone suggest how to disable the fetchMoreTimer (which of course lives in QAbstractItemViewPrivate :/)?
I tried overriding timerEvent(), but unfortunately about six different timers trigger the event and I don't know how to tell which one I can safely ignore.
Any suggestions?
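For illustration, this is the kind of gating I could add to my model if the timer itself cannot be touched: simply refuse to fetch while a long-running load/save is in progress. A flat QAbstractListModel stands in for my real tree model here, and all names and the chunk size are placeholders.

    #include <QAbstractListModel>
    #include <QVariant>

    class PausableFetchModel : public QAbstractListModel
    {
    public:
        void setFetchingSuspended(bool suspended) { m_suspended = suspended; }

        bool canFetchMore(const QModelIndex &parent) const override
        {
            // While suspended, the view's internal fetch timer finds nothing to do.
            return !m_suspended && !parent.isValid() && m_loaded < m_total;
        }

        void fetchMore(const QModelIndex &parent) override
        {
            if (m_suspended || parent.isValid())
                return;
            const int chunk = qMin(1000, m_total - m_loaded);   // load in chunks
            beginInsertRows(QModelIndex(), m_loaded, m_loaded + chunk - 1);
            m_loaded += chunk;
            endInsertRows();
        }

        int rowCount(const QModelIndex &parent = QModelIndex()) const override
        {
            return parent.isValid() ? 0 : m_loaded;
        }

        QVariant data(const QModelIndex &index, int role = Qt::DisplayRole) const override
        {
            if (!index.isValid() || role != Qt::DisplayRole)
                return QVariant();
            return QStringLiteral("Row %1").arg(index.row());
        }

    private:
        bool m_suspended = false;
        int m_loaded = 0;
        int m_total = 1000000;   // pretend there are a million rows to load
    };

Before a memory-heavy load/save I would call setFetchingSuspended(true), then setFetchingSuspended(false) afterwards; the view resumes fetching the next time it re-evaluates canFetchMore() (or after an explicit fetchMore() call).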
In Qt, I have a model subclassing QAbstractItemModel - it's a tree displayed in a QTreeView.
The model supports various forms of change which all work OK. The two of relevance are:
1) Some data in a small number of related rows changes
2) A visualisation change means that the majority of rows should change their formatting - in particular they have a change of background highlighting. Their DisplayRole data does not change.
The current design deals with both of these in the same way: for every row that has any change the model emits dataChanged(start_of_row_index,end_of_row_index). I emit the signal for both parent rows that change and for any of their children that have changed.
However, this performs badly in case 2 as the model gets big: a very large number of dataChanged signals are emitted.
I have changed the code so that in case 2 the model emits dataChanged only for the (single) row that is the parent of the entire tree.
This still appears to work correctly but does not accord with my understanding of the responsibilities of the model. But I suspect I may be wrong.
Perhaps I am misunderstanding the dataChanged signal? Does it actually cause the view to update all children as well as the specified range? Or can I avoid emitting dataChanged when it is not the DisplayRole that is changing?
Edited with my progress so far
As Jan points out, I ought to emit dataChanged either for most or all of the rows in case 2.
My code originally did this by emitting dataChanged for every changed row but this is too expensive - the view takes too long to process all these signals.
A possible solution could be to aggregate the dataChanged signal for any contiguous blocks of changed rows but this will still not perform well when, for example, every other row has changed - it would still emit too many signals.
Ideally I would like to just tell the view to consider all data as potentially changed (but all indexes still valid - the layout unchanged). This does not seem to be possible with a single signal.
Because of a quirk of the QTreeView class, it is possible (though incorrect according to the spec) to emit only one dataChanged(tl,br) as long as tl != br. I had this working and it passed our testing but left me nervous.
I have settled for now on a version which traverses the tree and emits a single dataChanged(tl,br) for every parent (with tl,br spanning all the children of that parent). This conforms to the model/view protocol and for our models it typically reduces the number of signals by about a factor of 10.
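For reference, this is roughly what the per-parent aggregation looks like. It is sketched on a QStandardItemModel subclass purely so the snippet is self-contained; in our code the member sits on our own QAbstractItemModel subclass, and the Qt::BackgroundRole hint reflects that only the highlighting changed.

    #include <QStandardItemModel>

    class HighlightModel : public QStandardItemModel
    {
    public:
        using QStandardItemModel::QStandardItemModel;

        // One dataChanged() per parent, spanning all of that parent's children,
        // instead of one signal per changed row.
        void emitSubtreeChanged(const QModelIndex &parent = QModelIndex())
        {
            const int rows = rowCount(parent);
            const int cols = columnCount(parent);
            if (rows > 0 && cols > 0)
                emit dataChanged(index(0, 0, parent),
                                 index(rows - 1, cols - 1, parent),
                                 {Qt::BackgroundRole});   // only the highlighting changed
            for (int r = 0; r < rows; ++r)
                emitSubtreeChanged(index(r, 0, parent));  // recurse into child parents
        }
    };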
It does not seem ideal however. Any other suggestions anyone?
You are expected to let your views know whenever any data gets changed. This "letting know" can happen in multiple ways; emitting dataChanged is the most common one when the structure of the indexes has not changed; others are the "serious" ones like modelReset or layoutChanged. By coincidence, some of Qt's views are able to pick up changes even without dataChanged, e.g. on a mouseover, but you aren't supposed to rely on that: it's an implementation detail and subject to change.
To answer the final bit of your question, yes, dataChanged must be emitted whenever any data returned from the QAIM::data() changes, even if it's "just" some other role than Qt::DisplayRole.
You're citing performance problems. What are the hard numbers -- are you actually getting any measurable slowdown, or are you just prematurely worried that this might be a problem later on? Are you aware of the fact that you can use both arguments to the dataChanged to signal a change over a big matrix of indexes?
EDIT:
A couple more things to try:
Make sure that your view does not request extra data. For example, unless you set the QTreeView's uniformRowHeights (IIRC), the view will have to execute O(n) calls for each dataChanged signal, leading to O(n^2) complexity. That's bad.
If you are really sure that there's no way around this, you might get away with combining layoutAboutToBeChanged, updating any persistent indexes, and layoutChanged. As you are not actually changing the structure of your indexes, this should be rather cheap. However, the optimization opportunity in the previous point is still worth taking.
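A minimal sketch of those two suggestions, assuming your model subclass is called MyTreeModel (the class name is a placeholder and its usual overrides are omitted):

    #include <QAbstractItemModel>
    #include <QTreeView>

    class MyTreeModel : public QAbstractItemModel   // usual overrides omitted
    {
    public:
        // "All data may have changed, but the index structure is intact."
        // Since no rows or columns move, there are no persistent indexes to remap
        // between the two signals.
        void notifyAllDataMayHaveChanged()
        {
            emit layoutAboutToBeChanged();
            emit layoutChanged();
        }
    };

    void configureView(QTreeView *view)
    {
        // Avoids the per-row height queries mentioned in the first point.
        view->setUniformRowHeights(true);
    }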
I have a QTreeView that displays file lists (using a model derived from QFileSystemModel). As building the file list takes a lot of time (I must read the content of each file to determine whether it is visible or not), I want to display the wait cursor during the analysis. The wait cursor must start when the user selects an item (directory) and stay until the whole list is displayed.
For this I made a lot of attempts:
using the expanded signal, but this signal is not related to drawing, so it arrives almost immediately,
managing the cursor in my model's data() function, but then I get a horrible blinking cursor,
managing the cursor by overriding paintEvent(); in this case there is a little blinking and the cursor appears late,
...
So none of my "solutions" is perfect. Do you have a way to do what I want?
Thanks a lot.
One more idea, though I didn't try it:
Try checking QAbstractItemView::State in a timer after the QTreeView::expanded() signal.
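An untested sketch of what I mean follows. QAbstractItemView::state() and the State enum are protected, so a small subclass is needed to reach them; the class name, the helper, and the 100 ms polling interval are arbitrary, and whether the view really stays out of NoState during your long analysis is exactly what needs checking.

    #include <QApplication>
    #include <QTimer>
    #include <QTreeView>

    class StateAwareTreeView : public QTreeView
    {
    public:
        using QTreeView::QTreeView;
        bool isBusy() const { return state() != QAbstractItemView::NoState; }
    };

    // Call this from a slot connected to QTreeView::expanded(); it shows the wait
    // cursor and polls until the view settles back into NoState.
    void showWaitCursorWhileBusy(StateAwareTreeView *view)
    {
        QApplication::setOverrideCursor(Qt::WaitCursor);
        auto *poll = new QTimer(view);
        QObject::connect(poll, &QTimer::timeout, view, [view, poll]() {
            if (!view->isBusy()) {
                QApplication::restoreOverrideCursor();
                poll->deleteLater();
            }
        });
        poll->start(100);
    }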
I've messed up my views a bit (no big surprise with ClearCase) and I have a child stream that has most of what I need, but not all. My parent stream needs to be updated, but I can't update it because of some issues (maybe evil twins, I don't know).
Is it possible/wise to do the following
1) clear all elements in the parent stream
2) use clearfsimport to perform mass update on child stream
3) deliver child stream to parent
This of course depends on the fact that child-stream elements are not deleted when they are removed from the parent.
Should I just clear out all elements of both views and start over? Any suggestions would be appreciated.
Yes, you can do a clearfsimport from whatever source you want to the child stream.
But I wouldn't recommend "clearing" (as in "rmname'ing") all elements from the parent stream, even though that doesn't rmname them in the child stream, as my answer to your previous question details.
If you have a valid source (ie some directory with every file you need), you can clearfsimport it to your child stream view, in order to be complete.
Then try the deliver and identify the potential evil twins: your deliver will stop quickly at the "directory merge" stage, asking you to choose between two (identically named) files: you will choose the one coming from the child stream.
All the other files present in both streams will have their history updated as expected by that deliver.
I am attempting to use DirectShow to play two AVI files consecutively (one after the other) so that there is no interruption in the audio or video when the player transitions from one file to the next.
I have two custom controls on my form. Each one is pre-loaded with an AVI file, and before playback begins I set up all the DirectShow interfaces, set the video windows and resize them, call IMediaControl.Run(), then IMediaControl.Pause(), then IMediaSeeking.SetPositions to reset to frame 0, on both controls. On the form, you can see that both files are paused at their initial frames.
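In code, the cueing sequence on each control is essentially the following (a C++ sketch of what my wrappers do; error handling omitted):

    #include <dshow.h>

    // Cue a graph so it sits paused on frame 0, ready for an instant start.
    void cueAtFirstFrame(IMediaControl *control, IMediaSeeking *seeking)
    {
        control->Run();      // start streaming so the renderers receive data
        control->Pause();    // freeze with the pipeline primed
        LONGLONG start = 0;  // media time 0 = frame 0 (100 ns units)
        seeking->SetPositions(&start, AM_SEEKING_AbsolutePositioning,
                              nullptr, AM_SEEKING_NoPositioning);
    }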
I then call IMediaControl.Run() on the first control, and wait for it to complete before calling Run() on the second control. Initially, I hooked into the first video's EC_COMPLETE notification message, and used this to start the second. Thinking that this event might be slow to arrive (turns out it is, but for a weird reason), I tried two other approaches:
Check the first video's current position inside a timer that goes off every second or so (using IMediaPosition.get_CurrentPosition). When the current position is within a second of the video's stop time (known in advance from IMediaPosition.get_StopTime), I go into a tight while loop and wait for the current position to equal the stop time, and then call Run() on the second video.
Same as the first, except I replace the while loop with a call to timeSetEvent from winmm.dll, with a delay set so that it fires right when the first file is supposed to end. I use the callback to Run() the second file (a rough sketch of this follows below).
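The sketch of the timeSetEvent approach, in C++ (my real code goes through the interop wrappers; the names, the 1 ms resolution and the lack of error handling are just for illustration):

    #include <windows.h>
    #include <dshow.h>
    #include <mmsystem.h>
    #pragma comment(lib, "winmm.lib")

    static IMediaControl *g_secondControl = nullptr;   // second graph, already cued

    static void CALLBACK startSecondClip(UINT, UINT, DWORD_PTR, DWORD_PTR, DWORD_PTR)
    {
        // The filter graph manager is thread-safe, so calling Run() from the
        // multimedia-timer thread is normally fine, but verify for your setup.
        if (g_secondControl)
            g_secondControl->Run();
    }

    void scheduleSecondClip(IMediaPosition *firstPos, IMediaControl *second)
    {
        g_secondControl = second;
        REFTIME now = 0, stop = 0;                     // REFTIME: seconds, as double
        firstPos->get_CurrentPosition(&now);
        firstPos->get_StopTime(&stop);
        const UINT delayMs = static_cast<UINT>((stop - now) * 1000.0);
        // One-shot multimedia timer that fires right when the first clip should end.
        timeSetEvent(delayMs, 1, startSecondClip, 0, TIME_ONESHOT);
    }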
Either of these two methods substantially cuts down the delay between the end of the first file and the beginning of the second, indicating that the EC_COMPLETE message doesn't arrive immediately after the file is complete (I also tried hooking the EC_SEGMENT_COMPLETE message, which is supposed to be used for looping within a file, but apparently nobody supports this - it never occurs on my machine, at least).
Doing all of the above has cut the transition delay from as much as a second, down to a barely perceptible glitch; about a third of the time the files transition with no interruption at all, which suggests there's no fundamental reason I can't get this to work all the time.
The slight delay is still unacceptable, unfortunately. I assume (and I could easily be wrong) that the remaining delay is due to a slight variable delay between the call to IMediaControl.Run() and when the video actually starts playing.
Does anybody know anything I can do to eliminate this little lag? It would also help to be told this is fundamentally impossible for whatever reason, which wouldn't surprise me. I've never encountered a video player in Windows that doesn't have this problem, so it may not be doable.
More info: the AVI files I'm playing are completely uncompressed (both video and audio), so I don't think the lag is due to DirectShow having to decompress the video ahead of play start, although it may still buffer ahead as a matter of course (and this may be the source of the problem). I would have thought that starting play, pausing and then rewinding to the beginning would fix this.
Also, the way I'm handling the transition is to actually have the second control underneath the first; when the first completes playing, I start the second and then call BringToFront on it, creating the appearance of a single video transitioning between the two originals. I don't think the glitch is due to this, because it works perfectly some of the time, and even if this were problematic, it wouldn't explain the matching audio glitch.
Even more: I just tried starting the second video 30-50 milliseconds "early" and that seemed to eliminate even more of the gap, so I'm guessing that the lag in Run() is about that long. It appears to be variable, though, so this is still not where I need it to be.
Still more: perhaps I could eliminate this delay by loading the AVIs from memory rather than from a file. Unfortunately, I have no idea how to do this. IMediaControl only has a RenderFile() method, not something like a RenderStream or RenderMemory method.
If you call IMediaControl::Run on a stopped graph, the graph manager will post the call to a worker thread (so there's some variability). On the worker thread, the graph will be paused. Render filters only complete a pause transition once they have received data, so once GetState() returns S_OK, the graph manager knows that the graph is fully cued. At this point, it picks a time roughly 10ms into the future, and calls Run on each filter with that time as the start point. Since it takes time to tell each filter to Run, the dshow Run method has a parameter which is the refclock time at which a sample timestamped zero should be played -- i.e. the time at which the actual transition to run mode should take place.
To synchronise this with another graph, you first have to ensure that both graphs have the same clock. Query the graph (not the filter) for IMediaFilter, and call GetSyncSource on one graph and SetSyncSource on the other. Then you need to pause the second graph, so that it is cued and ready. When you want to start it, call IMediaFilter::Run instead of IMediaControl::Run, and you can pass your own start time. This still has to be a few milliseconds into the future, so the best thing might be to set the start time of the second graph to be the first graph's start time plus its duration (for an indexed container of uncompressed streams, the duration should be accurate).
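A rough sketch of that sequence (error handling omitted, variable names are placeholders; graph2 is assumed to be built but not yet running):

    #include <dshow.h>
    #pragma comment(lib, "strmiids.lib")

    void startBackToBack(IGraphBuilder *graph1, IGraphBuilder *graph2)
    {
        IMediaFilter *mf1 = nullptr, *mf2 = nullptr;
        graph1->QueryInterface(IID_IMediaFilter, reinterpret_cast<void **>(&mf1));
        graph2->QueryInterface(IID_IMediaFilter, reinterpret_cast<void **>(&mf2));

        // Share one reference clock between the two graphs.
        IReferenceClock *clock = nullptr;
        mf1->GetSyncSource(&clock);
        mf2->SetSyncSource(clock);

        // Cue the second graph; GetState() blocks until the renderers hold data.
        IMediaControl *ctrl2 = nullptr;
        graph2->QueryInterface(IID_IMediaControl, reinterpret_cast<void **>(&ctrl2));
        ctrl2->Pause();
        OAFilterState fs = 0;
        ctrl2->GetState(INFINITE, &fs);

        // Duration of the first clip, in 100 ns reference-time units.
        IMediaSeeking *seek1 = nullptr;
        graph1->QueryInterface(IID_IMediaSeeking, reinterpret_cast<void **>(&seek1));
        LONGLONG duration = 0;
        seek1->GetDuration(&duration);

        // Start the first graph a little into the future, and schedule the second
        // graph for exactly firstStart + duration on the shared clock.
        REFERENCE_TIME now = 0;
        clock->GetTime(&now);
        const REFERENCE_TIME firstStart = now + 10 * 10000;   // 10 ms from now
        mf1->Run(firstStart);
        mf2->Run(firstStart + duration);

        seek1->Release();
        ctrl2->Release();
        clock->Release();
        mf2->Release();
        mf1->Release();
    }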
Another approach is to use multiple graphs. Separating source from rendering would allow you to switch seamlessly between sources since they feed into a common render graph. There is sample source code for this approach at www.gdcl.co.uk/gmfbridge.
G