Generating a map between frame number and frame sample time using the SampleGrabber filter in DirectShow

In one of my applications I need to build a map between frame position (frame number) and the actual frame sample time for a given video file.
I'm using the DirectShow SampleGrabber filter in callback mode. I'm overriding the BufferCB method of the ISampleGrabberCB class; whenever the callback is called, I record the arriving sample time against the frame position in a map. The frame position is incremented from zero as each new sample arrives.
Though I'm able to generate the required map, this approach is very slow when it comes to handling large video files.
Can someone suggest how to generate this map more quickly, or a better approach altogether?
Thanks in advance.
Pradeep

There is basically no such thing as a "frame number" in DirectShow, only time stamps. The only way to get what you need is to go through the entire file and record the timestamps, as you already do.
However, the process may be much faster if you set the sample grabber to receive raw/undecoded frames. That way there is no need for a decoder, and the whole iteration through the frames happens quite quickly. Don't forget to remove the clock from the graph to request as-fast-as-possible processing (as opposed to the default real-time pace).
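The bookkeeping itself is simple; here it is sketched in Python as a language-neutral illustration. In a real graph the equivalent logic lives inside the C++ BufferCB callback, and the sample times come from the sample grabber rather than a list:

```python
# Build a frame-number -> sample-time map from callback deliveries.
# In DirectShow this logic sits inside ISampleGrabberCB::BufferCB;
# here the stream of callbacks is simulated with a plain list.

def build_frame_map(sample_times):
    """Map frame position (0-based) to the sample time of that frame."""
    frame_map = {}
    for frame_number, sample_time in enumerate(sample_times):
        frame_map[frame_number] = sample_time
    return frame_map

# Example: a 25 fps stream delivers samples 0.04 s apart.
times = [round(n * 0.04, 2) for n in range(5)]
print(build_frame_map(times))  # {0: 0.0, 1: 0.04, 2: 0.08, 3: 0.12, 4: 0.16}
```

The speed-up in the answer comes entirely from the graph configuration (raw samples, no clock), not from this mapping step, which is trivial either way.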

Related

AnalysisServices: Cannot query internal supporting structures for column because they are not processed. Please refresh or recalculate the table

I'm getting the following error when trying to connect Power BI to my tabular model in AS:
AnalysisServices: Cannot query internal supporting structures for column 'table'[column] because they are not processed. Please refresh or recalculate the table 'table'
It is not a calculated column and the connection seems to work fine on the local copy. I would appreciate any help with this!
This would depend on how you are processing the data within your model. If you have just done a Process Data, then the accompanying meta objects such as relationships have not yet been built.
Every column of data that you load needs to also be processed in this way regardless of whether it is a calculated column or not.
This can be achieved by running a Process Recalc on the Database or by loading your tables or table partitions with a Process Full/Process Default rather than just a Process Data, which automatically runs the Process Recalc once the data is loaded.
If you have a lot of calculated columns and tables that result in a Process Recalc taking a long time, you will need to factor this in to your refreshes and model design.
If you run a Process Recalc on your database or a Process Full/Process Default on your table now, you will no longer have those errors in Power BI.
More in-depth discussion on this can be found here: http://bifuture.blogspot.com/2017/02/ssas-processing-tabular-model.html
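For reference, a Process Recalc can be issued as a TMSL refresh command, where `calculate` is the TMSL equivalent of Process Recalc (the database name below is a placeholder for your own model):

```json
{
  "refresh": {
    "type": "calculate",
    "objects": [
      { "database": "MyTabularModel" }
    ]
  }
}
```

Running this against the database (for example from an XMLA query window in SSMS) rebuilds calculated columns, relationships, and hierarchies without reloading the data itself.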

Shinobi charts recreating every time

I am using the shinobicharts ChartFragment. I have a scenario where each time I have to load the chart with different data within the same activity, i.e. I am not recreating the activity. But I have failed to do this: the chart is not recreated and the series I previously set on it are not cleared.
Can anybody help with how to remove the old series from the chart and load a new series each time? I tried shinobiChart().removeSeries() but it didn't work.
Thanks
If you wish to dynamically load new data, you do not necessarily need to remove the series. Instead you can simply add data points to your data adapter. You will find the following methods useful:
https://www.shinobicontrols.com/docs/ShinobiControls/ShinobiChartsAndroid/1.7.2/Premium/Normal/apidocs/docs/reference/com/shinobicontrols/charts/DataAdapter.html#add(int, com.shinobicontrols.charts.Data)
https://www.shinobicontrols.com/docs/ShinobiControls/ShinobiChartsAndroid/1.7.2/Premium/Normal/apidocs/docs/reference/com/shinobicontrols/charts/DataAdapter.html#add(com.shinobicontrols.charts.Data)
https://www.shinobicontrols.com/docs/ShinobiControls/ShinobiChartsAndroid/1.7.2/Premium/Normal/apidocs/docs/reference/com/shinobicontrols/charts/DataAdapter.html#addAll(int, java.util.Collection>)
https://www.shinobicontrols.com/docs/ShinobiControls/ShinobiChartsAndroid/1.7.2/Premium/Normal/apidocs/docs/reference/com/shinobicontrols/charts/DataAdapter.html#addAll(java.util.Collection>)
In these api docs you will also find similar methods to remove data points.
One thing to bear in mind is that adding or removing a data point to/from a SimpleDataAdapter instance which is set on a series will trigger a redraw of that series. If you are working with large numbers of data points, this might not be performant. In this case a more suitable approach might be to temporarily remove the data adapter from the series, perform the modification to the data, and then re-add the data adapter to the series. Alternatively you might like to implement your own DataAdapter and control when you instruct the chart to redraw that series (via the fireUpdateHandler method).
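The detach/modify/re-attach idea is independent of the Shinobi API itself. The toy model below (Python, with hypothetical `Series`/`Adapter` names standing in for the real Java classes) shows why batching avoids per-point redraws:

```python
# Toy model of a series that redraws whenever its attached adapter changes.
# The class names are hypothetical stand-ins for the real Java API.

class Series:
    def __init__(self):
        self.adapter = None
        self.redraws = 0

    def on_data_changed(self):
        self.redraws += 1

class Adapter:
    def __init__(self):
        self.points = []
        self.series = None

    def add(self, point):
        self.points.append(point)
        if self.series is not None:      # attached: every add triggers a redraw
            self.series.on_data_changed()

def attach(series, adapter):
    series.adapter = adapter
    adapter.series = series
    series.on_data_changed()             # chart redraws once on (re)attachment

# Naive: add 1000 points while attached -> 1000 redraws plus the attach.
s1, a1 = Series(), Adapter()
attach(s1, a1)
for p in range(1000):
    a1.add(p)
print(s1.redraws)   # 1001

# Batched: detach, add 1000 points, re-attach -> 2 redraws in total.
s2, a2 = Series(), Adapter()
attach(s2, a2)
s2.adapter, a2.series = None, None       # temporarily detach
for p in range(1000):
    a2.add(p)
attach(s2, a2)                           # one redraw for the whole batch
print(s2.redraws)   # 2
```

The same counting argument applies to a custom DataAdapter that defers fireUpdateHandler until all mutations are done.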
You can of course remove the series itself and add a new one, but this approach is potentially inefficient. That said, if you wish to remove a series from a chart you need to use the following method:
https://www.shinobicontrols.com/docs/ShinobiControls/ShinobiChartsAndroid/1.7.2/Premium/Normal/apidocs/docs/reference/com/shinobicontrols/charts/ShinobiChart.html#removeSeries(com.shinobicontrols.charts.Series)
In order for this method to be successful you will first need to obtain a reference to the correct series, which you must pass to this method as a parameter.
I hope that you find this information useful. If you need any further help, please post, if possible, any relevant code, such as the code you use to create your fragments and set up your chart.
Thanks and kind regards,
Kai.
Disclaimer - I work for ShinobiControls.

Huge data In DyGraph

Using Dygraphs, I created a line chart of temperature vs. time. In my database I have approximately 700k records, and the count keeps increasing on minute ticks. I can plot all 700k records in the chart. The issue is that fetching all of them at once is killing time: it took approximately 15 minutes each time I refreshed the page. This falls apart in real time.
Is there a better way to handle millions of records without any data-size restriction? I have done everything I can. Is there an alternative library to handle such stuff?
I ran into the same issue. My solution is this great dygraphs extension:
http://kaliatech.github.io/dygraphs-dynamiczooming-example/
It works great with giant amounts of data (in my case about 10 to 15 million data rows). You only need to write some server-side code for fetching the aggregated data from the DB. In my case, I created a stored procedure in the DB itself that takes the start date, the end date, some other IDs for data-point identification and, most importantly, the period by which the values get grouped and aggregated. The data is requested and sent back via Ajax almost instantly, so the front-end user gets a very smooth and responsive UI.
I aggregate the values in such a way that the number of datetime/value pairs is equal to the width of the chart area in pixels; a finer resolution wouldn't be visible anyway. By doing so, you can minimize the amount of data that gets sent "through the wire".
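The aggregation step described above can live in the stored procedure or in application code. Here is a minimal sketch of the idea (Python rather than SQL, function name hypothetical): bucket the series down to roughly one averaged point per horizontal pixel of the chart area.

```python
def downsample(points, width):
    """Reduce (time, value) pairs to at most ~`width` averaged buckets,
    one per horizontal pixel of the chart area."""
    if len(points) <= width:
        return list(points)
    out = []
    bucket = max(1, len(points) // width)
    for i in range(0, len(points), bucket):
        chunk = points[i:i + bucket]
        t = chunk[0][0]                            # bucket keeps its first timestamp
        v = sum(v for _, v in chunk) / len(chunk)  # and the mean of its values
        out.append((t, v))
    return out

series = [(t, float(t % 10)) for t in range(1000)]
print(len(downsample(series, 100)))  # 100
```

With 700k rows and a chart a few hundred pixels wide, this cuts the payload by three orders of magnitude before it ever reaches the browser.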
Amongst all JavaScript charting libraries, dygraphs is generally considered to handle large data sets well. If it's too slow for your needs, you should try downsampling your data or reducing the range of data that you show.
I have a charting app using Dygraphs that routinely displays well over 100,000 records, and the only delay I can speak of is loading the data over the internet connection. Once the data is loaded, manipulating it (windowing, changing the averaging) is virtually instant, and I use a Core 2 Duo CPU, not really state of the art.
I suggest you look at your download bandwidth.
Whatever the bottleneck, reducing your data set before sending it through the wire will help with everything, except fine detail analysis.

Draw current frame during seek operations

I'm using IMediaSeeking::SetPositions to set the video to some frame while playback is paused. But sometimes, if I do many SetPositions calls one after another, the frames are not redrawn until I start playback again. I tried using IVMRWindowlessControl9::RepaintVideo after SetPositions, but the frame remained unchanged.
Is there any way to repaint the current frame on pause / during seeking in VMR9?
In the standard pipeline there is no entity in the filter graph that keeps the last good video frame for redrawing purposes. Seeking involves flushing the remains in the pipeline, then preloading it with fresh data from the new streaming point.
If you want to provide the video renderer with a sort of banner to be displayed while the seek operation is in progress, the way I would do it is to put an extra custom filter onto the video leg of the pipeline, close to the video renderer. The filter would be in charge of keeping a copy of the last displayed frame, and it would be capable of delivering this data downstream to the video renderer on a seek operation, before it receives a valid frame from the upstream connection.
A handy copy of the last displayed frame might be suitable in other scenarios as well, since the filter can redeliver the data on request any time the application needs it. For instance, this can be used when the VMR's mixer bitmap is updated by the application, and the VMR expects the next master video frame to come in order to visualize the bitmap update. The filter can force the update by delivering the copy of what it holds.
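The idea of a pass-through stage that caches the last delivered frame can be sketched abstractly (Python here, with hypothetical names; in a real graph this would be a C++ transform filter sitting just upstream of the VMR):

```python
class LastFrameCache:
    """Pass-through stage that keeps a copy of the last frame it forwarded,
    so it can redeliver that frame on demand (e.g. during a seek)."""

    def __init__(self, downstream):
        self.downstream = downstream    # callable that accepts a frame
        self.last_frame = None

    def deliver(self, frame):
        self.last_frame = bytes(frame)  # keep a private copy, not a reference
        self.downstream(self.last_frame)

    def redeliver(self):
        if self.last_frame is not None:
            self.downstream(self.last_frame)

rendered = []
cache = LastFrameCache(rendered.append)
cache.deliver(b"frame-1")
cache.deliver(b"frame-2")
cache.redeliver()                       # e.g. repaint request while seeking
print(rendered)  # [b'frame-1', b'frame-2', b'frame-2']
```

The copy matters: in DirectShow the upstream buffer is recycled after delivery, so the filter must own its own copy of the frame data to redeliver it later.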

Autocomplete optimization for large data sets

I am working on a large project where I have to present efficient way for a user to enter data into a form.
Three of the fields of that form require a value from a subset of a common data source (a SQL table). I used jQuery and jQuery UI to build an autocomplete, which posts to a generic HttpHandler.
Internally the handler uses LINQ to SQL to grab the data required from that specific table. The table has about 10 columns, and the LINQ expression uses SqlMethods.Like() to match the single search term against each of those 10 fields.
The problem is that the table contains some 20K rows. The autocomplete works flawlessly, except that the sheer volume of data introduces delays, in the vicinity of 6 seconds or so (when debugging on my local machine), before the results show up.
The jQuery UI autocomplete has 0 delay, starts querying at the third character, and the result of the post is shown as Facebook-style multi-row selectable options. (I almost had to rewrite the autocomplete plugin...)
So the problem is data vs. speed. Any thoughts on how to speed this up? The only two ideas I had were to cache the data (how/where?) or to use a plain SQL data reader for data access.
Any ideas would be greatly appreciated!
Thanks,
<bleepzter/>
I would look at returning only the first X rows using the .Take(10) LINQ method. That should translate into a sensible SQL call, which will put much less load on your database. As the user types they will find fewer and fewer matches, so they will only see the data they require.
I normally reckon 10 items is enough for the user to understand what is going on and still get to the data they need quickly (see the amazon.com search bar for an example).
Obviously, if you can sort the data in a meaningful fashion, then the 10 results will be much more likely to give the user what they are after quickly.
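In LINQ terms the change is just appending .Take(10) to the query. A language-neutral sketch of the same server-side limiting (Python, with hypothetical field values) shows why it helps: the scan stops as soon as the limit is reached, instead of materializing all matches.

```python
def autocomplete(rows, term, limit=10):
    """Return at most `limit` rows where any field contains the term,
    mimicking a LIKE-per-column query capped with Take(10)."""
    term = term.lower()
    out = []
    for row in rows:
        if any(term in str(field).lower() for field in row):
            out.append(row)
            if len(out) == limit:   # stop early, like TOP 10 in the generated SQL
                break
    return out

rows = [("Widget %03d" % i, "SKU-%04d" % i) for i in range(20000)]
print(len(autocomplete(rows, "widget")))  # 10
```

The real win comes from the database doing this, of course: .Take(10) lets SQL Server stop scanning once it has 10 matches rather than shipping thousands of rows to the handler.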
Returning the top N results is a good idea for sure. We found (querying a potential list of 270K) that returning the top 30 is a better bet for the user finding what they're looking for, but that COMPLETELY depends on the data you are querying.
Also, you REALLY should raise the delay to something sensible like 100-300 ms. When you set the delay to ZERO, once you hit the 3-character trigger, effectively EVERY. SINGLE. KEY. STROKE. is sent as a new query to your server. This can easily have the unintended and unwelcome effect of slowing down the response even MORE.
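The effect of the delay setting can be seen with a tiny simulation (Python; in jQuery UI this is the autocomplete `delay` option). A debounced query fires only when no further keystroke arrives within the delay window:

```python
def queries_sent(keystroke_times_ms, delay_ms):
    """Count the requests a debounced autocomplete would issue:
    a keystroke only triggers a query if the next keystroke arrives
    more than `delay_ms` later (the final keystroke always fires)."""
    n = 0
    for i, t in enumerate(keystroke_times_ms):
        is_last = i == len(keystroke_times_ms) - 1
        if is_last or keystroke_times_ms[i + 1] - t > delay_ms:
            n += 1
    return n

typing = [0, 120, 250, 380, 900]        # someone typing, then pausing
print(queries_sent(typing, 0))          # 5 -> every keystroke becomes a request
print(queries_sent(typing, 300))        # 2 -> only the pauses trigger requests
```

With a zero delay every keystroke past the trigger hits the server; a 100-300 ms delay collapses a burst of typing into a single query.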