Draw current frame during seek operations - directshow

I'm using IMediaSeeking::SetPositions to seek the video to a specific frame while playback is paused. But sometimes, if I call SetPositions several times in a row, the frame is not redrawn until I start playback again. I tried calling IVMRWindowlessControl9::RepaintVideo after SetPositions, but the frame remained unchanged.
Is there any way to repaint the current frame while paused / during seeking in VMR9?

In the standard pipeline there is no entity on the filter graph that keeps the last good video frame for redrawing purposes. Seeking involves flushing whatever remains in the pipeline, then preloading it with fresh data from the new streaming position.
If you want to provide the video renderer with a sort of banner to be displayed while a seek operation is in progress, the way I would do it is to put an extra custom filter on the video leg of the pipeline, close to the video renderer. This filter would be in charge of keeping a copy of the last displayed frame, and it would be capable of delivering this data downstream to the video renderer during a seek, before a valid frame arrives from the upstream connection.
A handy copy of the last displayed frame can be useful in other scenarios as well, since the filter can redeliver the data on request whenever the application needs it. For instance, this can be used when the VMR's mixer bitmap is updated by the application and the VMR expects the next master video frame to arrive before it visualizes the bitmap update. The filter can force the update by redelivering the copy it holds.

Related

Updating JavaFX with large data sets parsed from data on the disk

I have a TableView that contains a list of objects which pull metadata from files on the hard drive.
The list can easily be 5000 items long and has to be updated any time the hard-drive data changes. It can also be updated if the user provides an additional directory to build the dataset from. In short, the data in the TableView is liable to be updated regularly.
I currently have a single thread that handles all updates. It uses a combination of a few different techniques (persistence between sessions, a FileWalker to add new data, and WatchService to watch for updates during a session). Unfortunately, this thread is not the JavaFX thread, so when I update the ObservableList backing the TableView, I can occasionally get a ConcurrentModificationException.
The basic shape of the background daemon thread is a while (true) loop, because it polls the various places updates can come from and reacts accordingly.
Platform.runLater() does not seem to work for this large dataset. I tried having the background thread keep lists of objects to add and remove, and then requesting that the JavaFX thread update the ObservableList by reading the addMe and removeMe lists, but that didn't work: the runLater() calls stopped being executed during a large read of data, so the TableView wouldn't update for 6+ minutes until the background thread was done reading everything. Only after that would the runLater() calls finally run. I avoided issuing unnecessary runLater() calls by checking whether an update request was already pending, but pruning the number of requests didn't have any impact on the result.
Is there any way to have a constantly running thread like this in the background, and have it update the dataset for a JavaFX UI? I feel like there must be some way to do this, but I can't find anything that works.
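The coalescing the asker describes (scheduling a new UI-thread drain only when none is pending, and having the drain apply the whole batch at once) can be sketched platform-neutrally; in JavaFX, the Drain() body would be what runs inside Platform.runLater. All class and method names below are illustrative:

```cpp
#include <cassert>
#include <mutex>
#include <vector>

// Platform-neutral sketch of the "one pending update request at a time"
// pattern: the background thread queues items and schedules at most one
// UI-thread drain; the drain swaps the queue out under the lock and
// applies everything in a single batch.
class CoalescingUpdater {
public:
    // Background thread: record an item and report whether the caller
    // should schedule a drain (true only for the first item of a batch).
    bool Add(int item) {
        std::lock_guard<std::mutex> lock(mu_);
        pending_.push_back(item);
        if (drainScheduled_)
            return false;
        drainScheduled_ = true;
        return true;
    }

    // UI thread (in JavaFX, the body passed to Platform.runLater):
    // take the whole batch at once and clear the scheduled flag.
    std::vector<int> Drain() {
        std::lock_guard<std::mutex> lock(mu_);
        std::vector<int> batch;
        batch.swap(pending_);
        drainScheduled_ = false;
        return batch;
    }

private:
    std::mutex mu_;
    std::vector<int> pending_;
    bool drainScheduled_ = false;
};
```

The key property is that no matter how fast the background thread produces items, the UI thread sees at most one scheduled drain at a time, each applying a whole batch.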

Qt5 QTreeView with custom model and large data very slow scrolling

I have custom data that I need to display in a QTreeView. I have derived my model from QAbstractTableModel and made my own implementations of rowCount(), columnCount(), data(), and headerData(). The model is backed by a local QList of QLists, and the data() function reads values out of that list of lists directly, using the row and column from the QModelIndex parameter. There are two issues I'm running into.
The first is that loading a very large file is quite slow, which is understandable. The second is that scrolling is painfully slow, which I don't really understand. It turns out that if I pull the scroll handle down, the GUI hangs for about 20 seconds and then pops back. If I pull the handle a greater distance, the hang time increases accordingly. If I pull the handle all the way to the bottom of the scroll bar, then after waiting for the application to become responsive again, I can pull the handle up and down and get a much better response.
It seems to me that QTreeView is only asking for a small chunk of the available data, but when I have pulled the scroll handle all the way to the bottom of the scroll bar, once the application becomes responsive again, it has by that point read all the data.
Is there a way to program for a much more responsive experience with scrolling for large data? I don't mind a longer wait up front, so just something like forcing the view to read all data from the model up front would work.
I have also thought that I could go back to just deriving from QAbstractItemView and controlling how it requests and stores data, only allowing for storing the viewed data, plus a buffer of entries before and after the viewed data. That of course would mean I'd have to control the scroll bar, since the handle sizing would indicate a small amount of data, and I would like it to look to the user as it should for the size of data they are dealing with. Not really wanting to go there if I don't have to.
Two things:
Re-implement fetchMore() and canFetchMore() in your model. See this implementation example. Basically, the two functions allow lazy initialization of your data and should stop UI freezes.
Replace your usage of reset() and dataChanged() with the insert and remove functionality. Right now, you are forcing the view to recalculate which of 100,000 items to show.
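Stripped of the Qt machinery, the canFetchMore()/fetchMore() contract is lazy, chunked loading: the model exposes only the rows loaded so far and appends a chunk each time the view asks for more. A minimal Qt-free sketch of that contract (names and chunk size illustrative; a real QAbstractItemModel override would wrap the load in beginInsertRows()/endInsertRows()):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Qt-free sketch of the canFetchMore()/fetchMore() contract: expose only
// `loaded_` of `total_` rows, and append a fixed-size chunk each time the
// view requests more data.
class LazyModel {
public:
    LazyModel(size_t total, size_t chunk) : total_(total), chunk_(chunk) {}

    size_t rowCount() const { return loaded_; }

    // The view calls this to decide whether to request more rows.
    bool canFetchMore() const { return loaded_ < total_; }

    // The view calls this as the user scrolls toward the end.
    void fetchMore() { loaded_ = std::min(loaded_ + chunk_, total_); }

private:
    size_t total_;
    size_t chunk_;
    size_t loaded_ = 0;
};
```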
Use:
QTreeView view;
view.setUniformRowHeights(true);
Then the view doesn't hang.

Huge data in Dygraphs

Using Dygraphs, I created a line chart of temperature vs. time. My database has approximately 700k records, and it keeps growing on minute ticks. I can plot all 700k records in the chart; the issue is that fetching all of them at once takes approximately 15 minutes each time I refresh the page. This is unusable in real time.
Is there a better way to handle millions of records without any data-size restriction? I have tried everything I can think of. Is there any alternative library that handles this kind of thing?
I ran into the same issue. My solution is this great dygraphs extension:
http://kaliatech.github.io/dygraphs-dynamiczooming-example/
It works great with giant amounts of data (in my case, about 10 to 15 million rows). You only need to write some server-side code to fetch the aggregated data from the DB. In my case, I created a stored procedure in the DB itself that takes the start date, the end date, some IDs for datapoint identification and, most importantly, the period by which the values get grouped and aggregated. The data is requested and sent back via Ajax almost instantly, so the front-end user gets a very smooth and responsive UI.
I aggregate the values in such a way that the number of datetime/value pairs equals the width of the chart area in pixels; a finer resolution wouldn't be visible anyway. By doing so, you can minimize the amount of data that gets sent over the wire.
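The aggregation step described above is just bucketed averaging down to one point per pixel of chart width. Sketched here in C++ for concreteness (the same bucketing works in SQL or any server-side language; the function name is illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Reduce `values` to at most `widthPx` points by averaging each bucket,
// since resolution finer than one point per pixel of chart width is not
// visible anyway.
std::vector<double> DownsampleToWidth(const std::vector<double>& values,
                                      size_t widthPx) {
    if (widthPx == 0 || values.size() <= widthPx)
        return values;  // already at or below one point per pixel
    std::vector<double> out;
    out.reserve(widthPx);
    for (size_t b = 0; b < widthPx; ++b) {
        // Evenly partition the input range into widthPx buckets.
        size_t begin = b * values.size() / widthPx;
        size_t end = (b + 1) * values.size() / widthPx;
        double sum = 0.0;
        for (size_t i = begin; i < end; ++i)
            sum += values[i];
        out.push_back(sum / static_cast<double>(end - begin));
    }
    return out;
}
```

With 700k rows and a chart perhaps 1000 pixels wide, this reduces the payload by roughly three orders of magnitude.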
Amongst all JavaScript charting libraries, dygraphs is generally considered to handle large data sets well. If it's too slow for your needs, you should try downsampling your data or reducing the range of data that you show.
I have a charting app using Dygraphs that routinely displays well over 100,000 records, and the only delay I can speak of is loading the data over the internet connection. Once the data is loaded, manipulating it (windowing, changing the averaging) is virtually instant, and I use a Core 2 Duo CPU, hardly state of the art.
I suggest you look at your download bandwidth.
Whatever the bottleneck, reducing your data set before sending it through the wire will help with everything, except fine detail analysis.

Generating map between frame number and frame sample time using SampleGrabber filter in directshow

In one of my applications I need to know the mapping between frame position (frame number) and the actual frame sample time for a given video file.
I'm using the DirectShow SampleGrabber filter in callback mode. I'm overriding the BufferCB method of the ISampleGrabberCB class; whenever the callback is called, I record the arrived sample time against the frame position in a map. The frame position starts at zero and is incremented whenever a new sample arrives.
Though I'm able to generate the required map, the above approach is very slow when it comes to handling large video files.
Can someone suggest how to generate this map more quickly, or a better approach altogether?
Thanks in advance.
Pradeep
There is basically no such thing as a "frame number" in DirectShow, only time stamps. The only way to get what you need is to go through the entire file and record the timestamps, as you already do.
However, the process might be way faster if you set the sample grabber to receive raw/undecoded frames. This way there is no need for a decoder, and the whole iteration through frames happens pretty quickly. Don't forget to remove the clock from the graph to request ASAP processing (as opposed to the default real-time pace).
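For illustration, the frame-position-to-sample-time bookkeeping done in BufferCB boils down to the following (a plain C++ sketch outside DirectShow; sample times are shown in seconds here, whereas DirectShow actually reports them in 100 ns units, and all names are illustrative):

```cpp
#include <cassert>
#include <map>

// Sketch of the map built in BufferCB: each arriving sample's time stamp
// is recorded against a frame counter that starts at zero and increments
// once per sample, in arrival order.
class FrameIndex {
public:
    // Call once per arriving sample (i.e. from the BufferCB callback).
    void OnSample(double sampleTime) { index_[nextFrame_++] = sampleTime; }

    // Look up the sample time for a frame number; returns false if the
    // frame has not been seen.
    bool TimeForFrame(long frame, double* time) const {
        auto it = index_.find(frame);
        if (it == index_.end())
            return false;
        *time = it->second;
        return true;
    }

private:
    std::map<long, double> index_;
    long nextFrame_ = 0;
};
```

The bookkeeping itself is cheap; the 15-minute cost in the question comes from decoding, which is why grabbing undecoded frames and removing the clock speeds the scan up so much.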

Flex web application gets progressively slower and freezes

I have a Flex web application where I am visualizing data (for different countries) in the form of charts. The data is in the form of CSV files. There are individual files for individual charts i.e. one file has all data pertaining to one chart for all countries.
I have a left navigation menu that allows one to see data on a country-by-country basis. As I view more and more countries, the web application becomes progressively slower until it freezes completely. The problem goes away if I refresh the browser and empty the cache.
I am using the URLLoader class in flex to read the CSV data into a string and then I am parsing the string to generate the charts.
I realize this is happening because more and more data is somehow accumulating in the browser. Is there any way in Flex to rectify this? Any pointers/help would be appreciated.
Thanks
- Vinayak
Like @OXMO456 said before me, I would use the profiler to check this issue.
To refine my answer, I would also say: please make sure that you are following all of the rules for low memory usage in Flex, like
1. clearing out (removing) event listeners
2. nulling out static variables
and so on.
I would use the "snapshot" feature of the profiler and compare what is happening at minute 1 and at minute 2; the difference between the two is probably the source of your leak.