How do I throttle a CPU-intensive loop in Adobe Flex?

I have a method that connects to an HTTP server and requests, via XML-RPC, a list of data structures; for each data structure it then fetches a list of attributes and the values of those attributes. It's implemented using nested for each loops.
The problem is that it loads a lot of data all at once and consumes a massive amount of CPU (over 100%) while reading responses from the server and parsing the XML.
If I were writing the program in C, I'd insert a usleep() call at the end of the loop to wait before loading more data and reduce CPU usage. What would the equivalent be in Flex?

One of the biggest drawbacks of Flash/Flex is that the generated application is single-threaded, so running CPU-intensive tasks like parsing large responses will make the application freeze.
Some of the solutions I use to work around these problems are:
If possible, do not load everything from the server at once but load it through multiple calls (i.e. read your data as 10 pages of 50 results instead of 500 at once).
Make sure that the data you are loading is not directly bound to UI elements (changes in the data will trigger changes in the UI, which consumes more CPU).
Try to simulate a thread-like model by using a Timer object and working on a subset of your data at each timer tick (see the sketch after this list).
Also, returning XML results is not the most efficient approach (using RemoteObjects through BlazeDS is more efficient, as it uses binary AMF streams instead of strings).
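To illustrate the Timer idea above, here is a minimal AS3 sketch, not a definitive implementation: the items array and the processItem() helper are stand-ins for your parsed XMLRPC data and your per-item work. Each tick handles a bounded slice and then returns control to the player, so rendering and input stay responsive.

import flash.events.TimerEvent;
import flash.utils.Timer;

private const CHUNK_SIZE:int = 50;   // items handled per tick; tune to taste
private var items:Array;             // data parsed from the XMLRPC response (assumed)
private var index:int = 0;
private var timer:Timer;

private function startProcessing(data:Array):void
{
    items = data;
    index = 0;
    timer = new Timer(1);            // short delay; the player yields between ticks
    timer.addEventListener(TimerEvent.TIMER, onTick);
    timer.start();
}

private function onTick(event:TimerEvent):void
{
    var end:int = int(Math.min(index + CHUNK_SIZE, items.length));
    for (; index < end; index++)
    {
        processItem(items[index]);   // hypothetical per-item work goes here
    }
    if (index >= items.length)
    {
        timer.stop();
        timer.removeEventListener(TimerEvent.TIMER, onTick);
    }
}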

In some scenarios it's better to stick with what you have, especially if you're not in control of the server-side data source. Another alternative is to write getter/setter wrappers around your XML object. XML is a first-class citizen in AS3: easy for programs to read (E4X) but annoying for us devs to write and modify. If you can, fetch a page of data that, once received, kicks off the next request while the user works with the pages already loaded. Create the illusion of parallelism using serial techniques.
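Here is a hedged sketch of that serial-paging idea; the endpoint URL, the page/size parameters, the <item> elements, and the displayPage() helper are all invented for the example. Each completed response kicks off the next request, so the player only ever parses one page at a time.

import flash.events.Event;
import flash.net.URLLoader;
import flash.net.URLRequest;

private const PAGE_SIZE:int = 50;
private var page:int = 0;
private var loader:URLLoader = new URLLoader();

private function loadNextPage():void
{
    loader.addEventListener(Event.COMPLETE, onPageLoaded);
    // Hypothetical paged endpoint; substitute your real XMLRPC call.
    loader.load(new URLRequest(
        "http://example.com/rpc?page=" + page + "&size=" + PAGE_SIZE));
}

private function onPageLoaded(event:Event):void
{
    loader.removeEventListener(Event.COMPLETE, onPageLoaded);
    var xml:XML = new XML(loader.data);   // parse just this page (E4X)
    displayPage(xml);                     // user can work with it right away
    if (xml.item.length() == PAGE_SIZE)   // a full page suggests more data remains
    {
        page++;
        loadNextPage();
    }
}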


MediaRecorder API chunks as independent videos

I'm trying to build a simple app that streams video from the camera to a remote server using the browser.
For camera access from the browser I've found a wonderful WebRTC API: getUserMedia.
Now, for streaming it to the server, IIUC the best way would be to use some of the WebRTC APIs for transport and then some server-side library to deal with it.
However, at first I went with a bit different approach:
I used MediaRecorder on the stream from the camera, setting the timeslice for MediaRecorder.start() to a few hundred ms, e.g. 200. However, my assumptions about MediaRecorder turned out not to be in sync with what I observed:
If there was only one chunk uploaded to the server, it opens just fine.
If there are multiple chunks, none of them loads correctly; they give the error: Could not determine type of stream. But if I then use ffmpeg to concat all the chunks, the resulting file is fine. The same happens if I concatenate the blobs from MediaRecorder.ondataavailable on the client.
Thus the question:
Can the chunks, in theory, be independent video files? Or is that not what MediaRecorder was designed for? If not, then why do we even have the option to pass a timeslice parameter to its start() method?
Bonus question
If we set timeslice comparatively small, e.g. 10 ms, lots of the data blobs delivered to MediaRecorder.ondataavailable have size 0. Where can we find some sort of guarantee or spec for the minimal timeslice we can use, so that the data blobs are meaningful?
The documentation says the following:
If timeslice is not undefined, then once a minimum of timeslice milliseconds of data have been collected, or some minimum time slice imposed by the UA, whichever is greater, start gathering data into a new Blob blob, and queue a task, using the DOM manipulation task source, that fires a blob event named dataavailable at recorder with blob.
So my guess is that this is somehow related to some data blobs being of size 0. What does "some minimum time slice imposed by the UA" mean?
PS
Happy to provide code if needed, but the question is not about some specific code; it is about understanding the assumptions behind the MediaRecorder API and why they are there.
The timeslice parameter does not let you create independent media chunks; instead, it gives you the opportunity to save data (e.g. to the filesystem, or upload it to a server) on a regular basis, rather than holding potentially large media content in memory. Only the first chunk contains the container's initialization data; the following chunks are continuations of the same stream, which is why they only play after being concatenated.

Is it OK for a DirectShow filter to seek the filters upstream from itself?

Normally, seek commands are executed on a filter graph: they are called on the renderers in the graph and passed upstream by the filters until a filter that can handle the seek performs the actual seek operation.
Could an individual filter seek the upstream filters connected to one or more of its input pins in the same way, without affecting the downstream portion of the graph in unexpected ways? I would expect that there wouldn't be any graph state changes caused by calling IMediaSeeking.SetPositions upstream.
I'm assuming that all upstream filters are connected to the rest of the graph via this filter only.
Obviously the filter would need to be prepared to handle the resulting BeginFlush, EndFlush and NewSegment calls coming from upstream appropriately, and to distinguish samples that arrived before and after the seek operation. It would also need to set new time stamps on its output samples so that the presentation times stay consistent. Any other issues?
It is perfectly feasible to do what you require. I used this approach to build video and audio mixer filters for a video editor. A full description of the code is given in BBC White Papers 129 and 138, available from http://www.bbc.co.uk/rd
A rather ancient version of the code can be found on www.SourceForge.net if you search for AAFEditPack. The code is written in Delphi, using DSPack to get access to the DirectShow headers. I did this because it makes it easier to handle COM object lifetimes, since smart pointers are implemented by default. It should be fairly straightforward to transfer the ideas to a C++ implementation if that is what you use.
The filters keep lists of the sub-graphs (a section of a graph, but running in the same FilterGraph as the mixers). The filters implement a custom version of TBCPosPassThru which knows about the output pins of the sub-graph for each media clip, and which handles passing on the seek commands to get each clip ready for replay when its point in the timeline is reached. The mixers handle the BeginFlush, EndFlush, NewSegment and EndOfStream calls for each sub-graph so they are kept happy. The editor uses only one FilterGraph, which houses both the video and audio graphs. Seek commands are made by the graph on both the video and audio renderers, and these commands are passed upstream to the mixers, which implement them.
Sub-graphs that are not currently active are blocked by the mixer holding references to the samples they have delivered. This does not cause any problems for the FilterGraph because, as Roman R says, downstream filters only care about getting a consecutive stream of samples and do not know about what happens upstream.
Some key points you need to make sure of to avoid wasted debugging time are:
Your decoder filters need to be able to cue to the exact media frame or audio time. This is not as easy to do as you might expect, especially with compressed formats such as MPEG-2, which was designed for transmission and has no frame index in its files. If you do not do this, the filter may wait indefinitely for a NewSegment call with the correct media times.
Your sub-graphs need to present a NewSegment time equal to the value you asked for in your seek command before delivering samples. Some decoders seek only to the nearest key frame, which is a bit unhelpful, and some are a bit arbitrary about the timing of their NewSegment and the following samples.
The start and stop times of each clip need to be within the duration of the file. It's probably not a good idea to police this in the DirectShow filter, because you would probably want to construct a timeline without needing to run the filter first. I did this in the component that manages the FilterGraph.
If you want to add sections from the same source file consecutively in the timeline, with effects that span the transition, you need two instances of the sub-graph for that file; and if you have more than one transition for the same source file, your list needs to alternate the graphs for successive clips. This is because each sub-graph should only play monotonically: issuing lots of SetPositions calls would waste CPU cycles and would not work well with compressed files.
The filter's output pins define the entire seeking behaviour of the graph. The output sample time stamps (IMediaSample.SetTime) are set by the filter, so you need to get them correct, with no missing time stamps. You can also set the MediaTime (IMediaSample.SetMediaTime) values if you like, although you have to be careful to get them correct or the graph may drop samples or stall.
Good luck with your development. If you need any more information please contact me through StackOverflow or DTSMedia.co.uk

BlazeDS slow with large number of objects

I'm developing a mobile app using Flex, and I have run into some problems using BlazeDS. Some users request a relatively large amount of data from my server, which returns in about 2 seconds. The data consists of some fairly simple objects (Client, which has a name/phone/email and a few other properties, some of which are nested objects with more properties). The largest requests consist of no more than about 10,000 of these objects, which is only a few MB in size.

The problem I am running into is that as soon as the server sends its response, the mobile screen locks up while the data is being processed. For 10,000 objects this can take several minutes, sometimes even crashing the device, and at best leaves the user with a frozen screen the entire time. For the average user it is at least 2-5 seconds of frozen screen. This is not only an issue for devices with limited capabilities; it also happens on my PC (i5 processor, 8GB RAM).

From what I can tell, this downtime occurs somewhere between when the device receives the response and when I can access the data. Setting a breakpoint on the first line of the following RemoteObject result handler shows the screen locking up BEFORE the breakpoint is reached:
protected function myResultHandler(event:ResultEvent):void
{
    var result:ArrayCollection = event.result as ArrayCollection;
    // Do other stuff here
}
I know very little about BlazeDS and AMF, so my only guess is that the freezing happens while the objects are being created on the device. Is there any way to speed up this process at all? Should I normally expect to see really poor performance like this? Any help would be greatly appreciated.
After a couple of hours digging around, I found the solution to my problem. On the server side, the objects I was sending had a ton of extraneous properties unrelated to the information I needed in the mobile app. In addition, those classes had helper methods of the form getMyHelper(), which BlazeDS serializes as properties, so the Flex side attempted to set properties that did not exist on the AS objects. This resulted in a huge list of reference errors being thrown during the download. I created stripped-down 'lite' versions of the objects with no extra properties or methods and sent those across instead. The massive lists now display nearly instantaneously after the response is received.
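For illustration, here is a minimal sketch of what such a 'lite' value object might look like on the Flex side; the class name and alias (ClientLite, com.example.dto.ClientLite) are hypothetical, and the [RemoteClass] alias must match the stripped-down Java class registered with BlazeDS.

package dto
{
    // Lite value object holding only the fields the mobile UI needs.
    // The alias must match the server-side Java class name (invented here)
    // so BlazeDS can map the AMF payload onto this type.
    [RemoteClass(alias="com.example.dto.ClientLite")]
    [Bindable]
    public class ClientLite
    {
        public var id:int;
        public var name:String;
        public var phone:String;
        public var email:String;
    }
}

Because the class declares only the needed public fields, AMF deserialization does no wasted work and throws no reference errors.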

Different approaches on getting captured video frames in DirectShow

I was using a callback mechanism to grab the webcam frames in my media application. It worked, but was slow due to certain additional buffer functions that were performed within the callback itself.
Now I am trying the other way to get frames: calling a method to grab the frame, instead of using a callback. I used a sample from CodeProject that makes use of IVMRWindowlessControl9::GetCurrentImage.
I encountered the following issues.
With a Microsoft webcam, the preview didn't render (only a black screen) on Windows 7, but the same camera rendered the preview on XP.
My doubt here is: do the VMR-specific functionalities depend on camera drivers on different platforms? If not, how could this difference happen?
Wherever the sample application worked, I observed that the biBitCount member of the resulting BITMAPINFOHEADER structure was 32.
Is this a value set by the application, or a driver setting for VMR operations? How is it configured?
Finally, which is the best method to grab webcam frames: a callback approach or a direct approach?
Thanks in advance,
IVMRWindowlessControl9::GetCurrentImage is intended for occasional snapshots, not for regular image grabbing.
Quote from MSDN:
This method can be called at any time, no matter what state the filter is in, whether running, stopped or paused. However, frequent calls to this method will degrade video playback performance.
This method reads back from video memory, which is slow in the first place. It also performs a conversion (slow again) to the RGB color space, because this format is the most suitable for non-streaming apps and gives fewer compatibility issues.
All in all, you can use it for periodic image grabbing; however, this is not what you are supposed to do. To capture at streaming rate you need to use a filter in the pipeline, or a Sample Grabber with a callback.

Is there a way for different Flex SWFs to send large blocks of data between each other without LocalConnection?

Is there a way for different Flex SWFs to send large blocks of data between each other without using LocalConnection, which is size-limited, or SharedObjects? This needs to happen on the client, without server communication.
How large are we talking about? Any client-side solution is limited in size for security purposes. You either use LocalConnection, SharedObjects, or JavaScript.
If it's too big, you might want to split the object into chunks and send it over piece by piece; I've done this myself with LocalConnection and streaming data. Other than that, you could always rely on user interaction (save a file, then browse to that file from the other Flex SWF). Why are there two Flex SWFs anyway? And why do they need to communicate large amounts of data between them?
If the two SWFs are both running in the same web page, then you should be able to use the ExternalInterface class with some JavaScript to forward the data from one SWF to the other, as there is (I believe) no limit on the size of ExternalInterface calls.
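As a rough illustration of the ExternalInterface route, not a definitive implementation: the sketch below assumes the receiving SWF is embedded with the id "swfB" and that the hosting page defines a small JavaScript forwarder named forwardToB; all of those names are hypothetical.

import flash.external.ExternalInterface;

// Receiving SWF: expose a callback that page JavaScript can invoke.
if (ExternalInterface.available)
{
    ExternalInterface.addCallback("receiveData", onDataReceived);
}

private function onDataReceived(payload:String):void
{
    // deserialize and use the data here
}

// Sending SWF: hand the data to the page-level forwarder, assuming the
// page defines something like:
//   function forwardToB(p) { document.getElementById("swfB").receiveData(p); }
private function sendToOtherSwf(payload:String):void
{
    if (ExternalInterface.available)
    {
        ExternalInterface.call("forwardToB", payload);
    }
}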
Other than that, your only choices are the LocalConnection or SharedObject methods. I have also had success with splitting large data into smaller chunks and using the LocalConnection class to send it piece by piece. If you do this, just watch out: each block of data sent will not necessarily be received in the order in which it was sent!
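For the chunk-splitting approach, here is a minimal hedged sketch; the connection name _bigPipe, the onChunk method, and the chunk size are all invented for the example. Adobe documents LocalConnection.send() as limited to about 40 KB of serialized data per call, which is why the payload is split, and the sequence number lets the receiver reassemble the pieces even if they arrive out of order.

import flash.net.LocalConnection;

// Sender: split a large string into numbered chunks.
private var conn:LocalConnection = new LocalConnection();
private const CHUNK:int = 30000; // stay safely under the ~40 KB send limit

private function sendLargeData(data:String):void
{
    var total:int = int(Math.ceil(data.length / CHUNK));
    for (var i:int = 0; i < total; i++)
    {
        conn.send("_bigPipe", "onChunk",
                  i, total, data.substr(i * CHUNK, CHUNK));
    }
}

// Receiver: collect chunks by sequence number, then reassemble.
private var parts:Array = [];
private var received:int = 0;

private function listen():void
{
    var inbound:LocalConnection = new LocalConnection();
    inbound.client = this;        // exposes onChunk() to the sender
    inbound.connect("_bigPipe");  // "_" prefix skips domain qualification
}

public function onChunk(i:int, total:int, piece:String):void
{
    parts[i] = piece;             // order-independent reassembly
    if (++received == total)
    {
        var whole:String = parts.join(""); // complete payload, in order
        // hand "whole" to the rest of the app here
    }
}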
