BlazeDS slow with large number of objects - apache-flex

I'm developing a mobile app using Flex and I have run into some problems using BlazeDS. Some users request a relatively large amount of data from my server, which returns in about 2 seconds. The data consists of fairly simple objects (Client, which has a name/phone/email and a few other properties, some of which are other nested objects with more properties). The largest requests consist of no more than about 10,000 of these objects, which is only a few MB in size.

The problem I am running into is that as soon as the server sends its response, the mobile screen locks up while the data is being processed. For 10,000 objects this can take several minutes, sometimes crashes the device, and at best leaves the user with a frozen screen the entire time. For the average user it is at least 2-5 seconds of frozen screen. Nor is this only an issue for devices with limited capabilities: the same thing happens on my PC (i5 processor, 8GB RAM).

From what I can tell, this downtime occurs somewhere between the device receiving the response and my code getting access to the data. Even with a breakpoint set on the first line of the following RemoteObject result handler, the screen locks up BEFORE execution reaches the breakpoint:
protected function myResultHandler(event:ResultEvent):void
{
    var result:ArrayCollection = event.result as ArrayCollection;
    // Do other stuff here
}
I know very little about BlazeDS and AMF, so my only guess is that the freezing happens while the objects are being created on the device. Is there any way to speed up this process at all? Should I normally expect to see really poor performance like this? Any help would be greatly appreciated.

After a couple of hours of digging around, I found the solution to my problem. On the server side, the objects I was sending had a ton of extraneous properties unrelated to the information the mobile app needed. In addition, those classes had helper methods of the form getMyHelper(), which BlazeDS serialized as properties; this threw a huge list of reference errors during the download, since no properties with those names existed on the AS objects. I created stripped-down 'lite' versions of the objects with no extra properties or methods and sent those across instead. The massive lists now display nearly instantaneously after the response is received from the server.
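For illustration, here is a minimal sketch of the 'lite' object idea, assuming a Java backend behind BlazeDS (the class and field names are hypothetical). The point is that the class exposes only the bean properties the app actually needs, with no extra helper getters for BlazeDS to serialize as phantom properties:

import java.io.Serializable;

// Hypothetical stripped-down transfer object: only the fields the mobile app
// displays, and no helper methods like getMyHelper() that BlazeDS would
// serialize as properties missing from the ActionScript class.
public class ClientLite implements Serializable {
    private String name;
    private String phone;
    private String email;

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getPhone() { return phone; }
    public void setPhone(String phone) { this.phone = phone; }

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}

On the Flex side, a matching ActionScript class mapped with [RemoteClass] and carrying the same properties keeps AMF deserialization free of reference errors.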

Related

Performance Issue with JavaFX multiple tabs simultaneous updates of TextArea

I'm relatively new to JavaFX and have written a small applet which launches a number of sub-processes (typically between 3 and 10). Each process has a dedicated tab displaying its current status and a large TextArea to which the process output is appended. For simplicity, all tabs are generated on startup, and each output line is appended with:
javafx.application.Platform.runLater(() -> logTextArea.appendText(line))
The applet works fine when the workload on the sub-processes is low to moderate (not many logs), but starts to freeze when the sub-processes are heavily used and generate a decent amount of logging output (a few hundred lines per second in total).
I looked into binding the TextArea to the output, but my understanding is that binding effectively calls Platform.runLater() as well, so there would still be hundreds of calls to the JavaFX application thread per second.
Batching logging outputs isn't an ideal solution either because I'd like to keep the displayed log as real-time as possible.
The only solution I can think of that might solve the problem is dynamic loading of individual tabs. This would definitely prevent unnecessary calls to update logging TextAreas that aren't currently visible, but before I go ahead and make the changes, I'd like to get some helpful advice from you here. Thanks!
Thanks for all your suggestions. Finally got around to implementing a fix today.
The issue was fixed by buffering the output lines and flushing the buffer to the TextArea when it reaches 20 lines or when 100 ms have elapsed since the last flush, whichever comes first.
In addition, I implemented a rolling output that limits the total displayed process output to 1,000 lines.
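For reference, a minimal sketch of that buffer-and-flush approach (class and member names are my own; the 1,000-line rolling trim is omitted for brevity, and a production version would also want a periodic timer to flush a buffer that goes quiet):

import javafx.application.Platform;
import javafx.scene.control.TextArea;
import java.util.ArrayList;
import java.util.List;

// Collapses a stream of log lines into batched TextArea updates: at most
// one Platform.runLater() call per 20 lines or per 100 ms, whichever
// threshold is crossed first.
class BufferedLogAppender {
    private static final int MAX_BATCH_LINES = 20;
    private static final long MAX_BATCH_AGE_MS = 100;

    private final TextArea logTextArea;
    private final List<String> buffer = new ArrayList<>();
    private long lastFlush = System.currentTimeMillis();

    BufferedLogAppender(TextArea logTextArea) {
        this.logTextArea = logTextArea;
    }

    // Called from the background thread that reads the process output.
    synchronized void append(String line) {
        buffer.add(line);
        long now = System.currentTimeMillis();
        if (buffer.size() >= MAX_BATCH_LINES || now - lastFlush >= MAX_BATCH_AGE_MS) {
            String batch = String.join("\n", buffer) + "\n";
            buffer.clear();
            lastFlush = now;
            // One hop to the JavaFX application thread per batch, not per line.
            Platform.runLater(() -> logTextArea.appendText(batch));
        }
    }
}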
Thanks again for your invaluable contribution!

OutOfMemoryException in SignalR due to large real-time data

I'm using SignalR for a real-time data ticker, with up to 10k rows contained in a single object being sent to the client per second. The memory of the IIS worker process keeps increasing until the ticking finally freezes.
First of all, as you may already have read, SignalR is built on the premise that you will not send large messages. However, in the real world this is a perfectly valid scenario.
So, what's the deal with the memory issues? SignalR has a circular buffer with a default size of 1000 elements, and it stores every message in that buffer, per open connection. So if you have 100 open connections and you've sent 1,000 messages, a total of 100 * 1,000 messages will be held in memory.
Another thing you should take into consideration is the .NET Framework's large object heap (LOH) and garbage collection. Every object larger than 85 KB goes to the large object heap, and the garbage collector treats objects in the LOH as generation 2 objects. Taking that into account, you can see that once your objects have been dereferenced from SignalR's circular buffer, they won't be garbage collected immediately, due to their size.
As @davidfowl said, you could make your data smaller, but sometimes you won't be able to do that without introducing some fairly complex mechanics on both the client and the server.
Fortunately, there is a way to reduce the default size of SignalR's circular buffer, by setting:
GlobalHost.Configuration.DefaultMessageBufferSize = 32;

Different approaches on getting captured video frames in DirectShow

I was using a callback mechanism to grab webcam frames in my media application. It worked, but was slow due to some additional buffer-handling functions that were performed within the callback itself.
Now I am trying the other way to get frames: calling a method to grab a frame on demand instead of receiving a callback. I used a CodeProject sample that makes use of IVMRWindowlessControl9::GetCurrentImage.
I encountered the following issues.
With a Microsoft webcam, the preview didn't render on Windows 7 (only a black screen), but the same camera rendered the preview fine on XP.
My doubt here is: do the VMR-specific functionalities depend on the camera drivers on different platforms? If not, how could this difference happen?
Wherever the sample application worked, I observed that the biBitCount member of the resulting BITMAPINFOHEADER structure was 32.
Is this a value set by the application, or a driver setting for VMR operations? How is it configured?
Finally, which is the best method to grab webcam frames: a callback approach, or a direct approach?
Thanks in advance,
IVMRWindowlessControl9::GetCurrentImage is intended for occasional snapshots, not for regular image grabbing.
Quote from MSDN:
This method can be called at any time, no matter what state the filter
is in, whether running, stopped or paused. However, frequent calls to
this method will degrade video playback performance.
This method reads back from video memory, which is slow in the first place. It also performs a conversion (slow again) to the RGB color space, because that format is the most suitable for non-streaming apps and causes the fewest compatibility issues.
All in all, you can use it for periodic image grabbing, but this is not what you are supposed to do. To capture at streaming rate you need to use a filter in the pipeline, or a Sample Grabber with a callback.

How do I throttle a CPU intensive loop in Adobe Flex?

I have a method which connects to an HTTP server and requests, via XML-RPC, a list of data structures; for each data structure it then gets a list of attributes and the values of those attributes. It's implemented using nested for each loops.
The problem is that it's loading a lot of data all at once, and consuming a massive amount of CPU (over 100%) reading responses from the server and parsing the XML.
If I were writing the program in C, I'd insert a usleep() at the end of the loop, to wait before trying to load more data and reduce CPU usage. What would the equivalent be in Flex?
One of the biggest drawbacks of Flash/Flex is that the generated application is single-threaded, so running CPU-intensive tasks like parsing large responses will make the application freeze.
Some of the solutions I use to work around those problems are:
If possible, do not load everything from the server at once but load it through multiple calls (e.g. read your data in 10 pages of 50 results instead of 500 at once).
Make sure that the data you are loading is not directly bound to UI elements (changes in the data will trigger changes in the UI that consume more CPU).
Try to simulate a thread-like model by using a Timer object and working on a subset of your data at each timer tick (see the sketch below).
Also, returning XML results is not the most efficient approach (using RemoteObjects through BlazeDS is more efficient, as it uses binary AMF streams instead of strings).
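To make the Timer idea concrete, here is a minimal sketch of tick-based chunked processing. It is written in Java for illustration (Swing's Timer plays the same role that flash.utils.Timer would in Flex: ticks run on the single UI thread, yielding control between chunks); all names are assumptions:

import javax.swing.Timer;
import java.util.List;
import java.util.function.Consumer;

class ChunkedProcessor {
    // Runs `work` on `chunkSize` items per timer tick, returning control to
    // the event loop between ticks so the UI stays responsive.
    static <T> void processInChunks(List<T> items, int chunkSize,
                                    int delayMs, Consumer<T> work) {
        final int[] index = {0};
        Timer timer = new Timer(delayMs, null);
        timer.addActionListener(e -> {
            int end = Math.min(index[0] + chunkSize, items.size());
            for (int i = index[0]; i < end; i++) {
                work.accept(items.get(i)); // one item of real work
            }
            index[0] = end;
            if (index[0] >= items.size()) {
                timer.stop(); // all chunks processed
            }
        });
        timer.start();
    }
}

For example, ChunkedProcessor.processInChunks(records, 200, 10, r -> parse(r)) would parse 200 records every 10 ms instead of all in one pass; this is the closest single-threaded analogue to the usleep() idea in the question.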
In some scenarios, it's better to stick with what you have access to if you're not in control of your server-side data source. Another alternative would be to write a getter/setter wrapper around your XML object. XML is a first-class citizen in AS3: easy for programs to read (E4X), but annoying for us devs to write and modify. If you can, fetch a page of data that, once received, kicks off the request for the next page while the user works with the pages that have already loaded. Create the illusion of parallelism using serial techniques.

ASP.NET guaranteed response time

Does anybody have any hints as to how to approach writing an ASP.net app that needs to have a guaranteed response time?
When under high load that would normally cause us to exceed our desired response time, we want to throw out an appropriate number of requests so that the rest of the requests can return within the max response time. Throwing out requests based on exceeding a fixed requests-per-second threshold is not viable, as other external factors that influence response time cause the maximum RPS we can safely support to drift and fluctuate fairly drastically over time.
It's OK if a few requests take a little too long, but we'd like the great majority of them to meet the required response-time window. We want to throw out the minimal, or near-minimal, number of requests so that we can process the rest within the allotted response time.
It should account for ASP.NET queuing time, and ideally the network request time, though that is less important.
We'd also love to do adaptive work, like making a DB call if we have plenty of time, but doing some computation instead if we're shorter on time.
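A rough sketch of the kind of adaptive shedding meant here (the names, the numbers, and the simple queue model are all assumptions, and it's written in plain Java rather than ASP.NET for illustration):

import java.util.concurrent.atomic.AtomicInteger;

// Deadline-based admission control: reject a request up front when the
// projected time to serve it would blow the response-time budget. The
// projection (in-flight count * average service time) is deliberately crude,
// but it adapts as conditions drift instead of relying on a fixed RPS cap.
class AdmissionController {
    private final long budgetMillis;
    private final AtomicInteger inFlight = new AtomicInteger();
    private volatile double avgServiceMillis = 50.0; // EWMA, arbitrary seed

    AdmissionController(long budgetMillis) {
        this.budgetMillis = budgetMillis;
    }

    boolean tryAdmit() {
        double projected = inFlight.get() * avgServiceMillis;
        if (projected > budgetMillis) {
            return false; // shed this request so the rest stay under budget
        }
        inFlight.incrementAndGet();
        return true;
    }

    void onComplete(long elapsedMillis) {
        inFlight.decrementAndGet();
        // Exponentially weighted moving average tracks drifting conditions.
        // (The update is racy, but tolerable for an estimate.)
        avgServiceMillis = 0.9 * avgServiceMillis + 0.1 * elapsedMillis;
    }
}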
Thanks!
SLAs with a guaranteed response time require a bit of work.
First off, you need to spend a lot of time profiling your application. You want to understand exactly how it behaves under various load scenarios: light, medium, heavy, crushing. When doing this profiling it is critical that it's done on the exact same hardware/software configuration that production uses; results from one set of hardware have no bearing on results from even a slightly different set. This isn't just about the servers, either: I'm talking routers, switches, cable lengths, hard drives (make/model), everything, down to BIOS revisions, RAID controllers, and any other device in the loop.
While profiling, make sure the workloads represent an actual slice of what you are going to see. Obviously certain load mixes will execute faster than others.
I'm not entirely sure what you mean by "throw out an appropriate number of requests". That sounds like you want to drop those requests, which sounds wrong on a number of levels; doing so usually counts against an SLA as an "outage".
Next, you are going to have to actively monitor your servers for load. If load levels get within a certain percentage of your max then you need to add more hardware to increase capacity.
Another thing: monitoring response times internally is only part of it. You'll need to monitor them from various external locations as well, depending on where your clients are.
And that's just about your application. There are other forces at work such as your connection to the Internet. You will need multiple providers with active failover in case one goes down... Or, if possible, go with a solid cloud provider.
Yes: at the last mvcConf, one of the speakers compared the performance of various view engines for ASP.NET MVC. I think it was Steven Smith's presentation that did the comparison, but I'm not 100% sure.
You have to keep in mind, however, that ASP.NET will really only play a very minor role in the performance of your app; the DB is likely to be your biggest bottleneck.
Hope the video helps.